Wednesday, December 17, 2008

Operating Systems: File Systems

File systems are an integral part of any operating system with the capacity for long-term storage. There are two distinct parts to a file system: the mechanism for storing files and the directory structure into which they are organised. In modern operating systems, where it is possible for several users to access the same files simultaneously, it has also become necessary to implement features such as access control and other forms of file protection.

A file is a collection of binary data. A file could represent a program, a document or, in some cases, part of the file system itself. In modern computing it is quite common for there to be several different storage devices attached to the same computer. A common data structure such as a file system allows the computer to access many different storage devices in the same way. For example, when you look at the contents of a hard drive or a CD you view them through the same interface, even though they are completely different media with data mapped onto them in completely different ways. Files can have very different data structures within them but can all be accessed by the same methods built into the file system. The arrangement of data within the file is then decided by the program creating it. The file system also stores a number of attributes for the files within it.

All files have a name by which they can be accessed by the user. In most modern file systems the name consists of three parts: a unique name, a period and an extension. For example, the file 'bob.jpg' is uniquely identified by the first word 'bob', while the extension 'jpg' indicates that it is a JPEG image file. The file extension allows the operating system to decide what to do with the file if someone tries to open it. The operating system maintains a list of file extension associations; should a user try to access 'bob.jpg', it would most likely be opened in whatever the system's default image viewer is.
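
As an illustration only (not any real operating system's implementation), such an association table can be modelled as a simple lookup from extension to handler; the names below are invented for the sketch:

// Toy extension-association table: maps an extension to a handler.
const associations = {
  jpg: "image viewer",
  txt: "text editor",
  mp3: "media player",
};

function handlerFor(fileName) {
  // Take the text after the last period as the extension.
  const ext = fileName.split(".").pop().toLowerCase();
  return associations[ext] || "unknown file type dialog";
}

console.log(handlerFor("bob.jpg")); // -> "image viewer"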

The system also stores the location of a file. In some file systems files can only be stored as one contiguous block. This simplifies storage and access to the file, as the system then only needs to know where the file begins on the disk and how large it is. It does, however, lead to complications if the file is to be extended or removed, as there may not be enough space available to fit the larger version of the file. Most modern file systems overcome this problem by using linked file allocation. This allows the file to be stored in any number of segments; the file system then has to store where every block of the file is and how large each one is. This greatly simplifies file space allocation but is slower than contiguous allocation, as the file may be spread out all over the disk. Modern operating systems overcome this flaw by providing a disk defragmenter: a utility that rearranges all the files on the disk so that they are each in contiguous blocks.
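
A toy sketch of the linked-allocation idea, with invented block numbers (real file systems store this information far more compactly):

// Each file is a list of (start block, length) segments, so it need
// not occupy one contiguous run on the disk. Illustrative only.
const fileTable = {
  "bob.jpg": [
    { startBlock: 120, blocks: 8 },  // first segment
    { startBlock: 310, blocks: 4 },  // file continues elsewhere on disk
  ],
};

// To read the file, the system walks the segment list in order.
function blocksOf(name) {
  const result = [];
  for (const seg of fileTable[name] || []) {
    for (let i = 0; i < seg.blocks; i++) result.push(seg.startBlock + i);
  }
  return result;
}

console.log(blocksOf("bob.jpg").length); // 12 blocks in two segments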

Information about a file's protection is also integrated into the file system. Protection can range from the simple systems implemented in the FAT file system of early Windows, where files could be marked as read-only or hidden, to the more secure systems implemented in NTFS, where the file system administrator can set up separate read and write access rights for different users or user groups. Although file protection adds a great deal of complexity and potential difficulties, it is essential in an environment where many different computers or users can have access to the same drives via a network or a time-shared system such as raptor.

Some file systems also store data about which user created a file and at what time they created it. Although this is not essential to the running of the file system it is useful to the users of the system.

In order for a file system to function properly it needs a number of defined operations for creating, opening and editing a file. Almost all file systems provide the same basic set of methods for manipulating files.

A file system must be able to create a file. To do this there must be enough space left on the drive to fit the file, and there must be no other file with the same name in the directory where it is to be placed. Once the file is created, the system will make a record of all the attributes noted above.

Once a file has been created we may need to edit it. This may mean simply appending some data to the end of it, or removing or replacing data already stored within it. When doing this, the system keeps a write pointer marking where the next write operation to the file should take place.

In order for a file to be useful it must of course be readable. To do this, all you need to know is the name and path of the file; from this the file system can ascertain where on the drive the file is stored. While reading a file the system keeps a read pointer, which stores which part of the drive is to be read next.

In some cases it is not possible to simply read all of a file into memory. File systems therefore also allow you to reposition the read pointer within a file. To perform this operation the system needs to know how far into the file you want the read pointer to jump. An example of where this is useful is a database system: when a query is made on the database, it is obviously inefficient to read the whole file up to the point where the required data is. Instead, the application managing the database determines where in the file the required piece of data is and jumps straight to it. This operation is often known as a file seek.
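
As a concrete illustration, here is what a file seek looks like through Node.js's fs API (the API is chosen for the sketch, not discussed in this article; the file name and record size are hypothetical):

const fs = require("fs");

const RECORD_SIZE = 128;    // hypothetical fixed-size records
const recordNumber = 5000;  // the record the application wants

const fd = fs.openSync("table.db", "r");
const buffer = Buffer.alloc(RECORD_SIZE);

// Read one record at byte offset recordNumber * RECORD_SIZE,
// without reading everything that comes before it.
fs.readSync(fd, buffer, 0, RECORD_SIZE, recordNumber * RECORD_SIZE);
fs.closeSync(fd);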

File systems also allow you to delete files. To do this the system needs to know the name and path of the file. It then simply removes the file's entry from the directory structure and adds all the space it previously occupied to the free space list (or whatever other free space management system it uses).

These are the most basic operations required for a file system to function properly. They are present in all modern computer file systems, but the way they function may vary. For example, performing the delete operation in a modern file system like NTFS, which has file protection built in, is more complicated than the same operation in an older file system like FAT. Both systems would first check whether the file was in use before continuing, but NTFS would then also have to check whether the user deleting the file has permission to do so. Some file systems also allow multiple people to open the same file simultaneously, and have to decide whether users have permission to write a file back to the disk while other users have it open. If two users have read and write permission on a file, should one be allowed to overwrite it while the other still has it open? And if one user has read-write permission and another only has read permission on a file, should the user with write permission be allowed to overwrite it if there's no chance of the other user also trying to do so?
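
To make the comparison concrete, here is a minimal sketch (with invented names and data, not the real NTFS or FAT logic) of the extra check an NTFS-style delete performs:

function canDelete(file, user) {
  // Both FAT- and NTFS-style systems refuse if the file is in use.
  if (file.openBy.length > 0) return false;
  // FAT-style: no access control list, so any user may delete.
  if (!file.acl) return true;
  // NTFS-style: look up the user's (or their group's) rights.
  const rights = file.acl[user.group] || file.acl[user.name] || [];
  return rights.includes("delete");
}

const report = { openBy: [], acl: { staff: ["read", "write", "delete"] } };
console.log(canDelete(report, { name: "bob", group: "staff" })); // true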

Different file systems also support different access methods. The simplest method of accessing information in a file is sequential access, where the information in a file is read from the beginning one record at a time. To change the position in a file it can be rewound or forwarded a number of records, or reset to the beginning of the file. This access method is based on file storage systems for tape drives, but it works as well on sequential access devices (like modern DAT tape drives) as it does on random-access ones (like hard drives). Although this method is very simple in its operation and ideally suited to certain tasks, such as playing media, it is very inefficient for more complex tasks such as database management. A more modern approach that better facilitates reading tasks that aren't likely to be sequential is direct access. Direct access allows records to be read or written over in any order the application requires. This method is better suited to modern hard drives, as they too allow any part of the drive to be read in any order with little reduction in transfer rate. Direct access suits most applications better than sequential access because it is designed around the most common storage medium in use today, as opposed to one that is now rarely used except for large offline backups. Given the way direct access works, it is also possible to build other access methods on top of it, such as sequential access, or an index of all the records in a file to speed up finding data.
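
A small sketch of that last point, building a sequential reader on top of a direct-access primitive; readAt is a hypothetical stand-in for whatever random-access read the underlying device exposes:

function makeSequentialReader(readAt, recordCount) {
  let position = 0; // the read pointer
  return {
    next() { return position < recordCount ? readAt(position++) : null; },
    rewind(n) { position = Math.max(0, position - n); },
    reset() { position = 0; },
  };
}

// Usage: iterate a 3-record "file" stored in an array.
const data = ["rec0", "rec1", "rec2"];
const reader = makeSequentialReader(i => data[i], data.length);
console.log(reader.next(), reader.next()); // rec0 rec1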

On top of storing and managing files on a drive, the file system also maintains a system of directories in which the files are referenced. Modern hard drives store hundreds of gigabytes; the file system helps organise this data by dividing it up into directories. A directory can contain files or more directories. As with files, there are several basic operations that a file system needs to be able to perform on its directory structure to function properly.

It needs to be able to create a file. This is also covered by the overview of operations on a file, but as well as creating the file it needs to be added to the directory structure.

When a file is deleted the space taken up by the file needs to be marked as free space. The file itself also needs to be removed from the directory structure.

Files may need to be renamed. This requires an alteration to the directory structure, but the file itself remains unchanged.

List a directory. In order to use the disk properly the user needs to know what's in all the directories stored on it. On top of this, the user needs to be able to browse through the directories on the hard drive.

Since the first directory structures were designed they have gone through several large evolutions. Before directory structures were applied to file systems, all files were stored on the same level; this is basically a system with one directory in which all the files are kept. The next advancement, which would be considered the first directory structure, is the two-level directory. In this there is a single list of directories, all on the same level, and the files are stored in these directories. This allows different users and applications to store their files separately. After this came the first directory structures as we know them today: directory trees. Tree-structured directories improve on two-level directories by allowing directories as well as files to be stored in directories. All modern file systems use tree-structured directories, but many have additional features, such as security, built on top of them.
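
A toy model of a tree-structured directory, purely for illustration: each directory holds files plus further directories, and resolving a path walks the tree one component at a time.

const root = {
  name: "/",
  files: ["readme.txt"],
  dirs: [
    { name: "home", files: [], dirs: [
      { name: "bob", files: ["bob.jpg"], dirs: [] },
    ]},
  ],
};

// Resolve a path like "/home/bob" by descending one component at a time.
function resolve(dir, path) {
  let current = dir;
  for (const part of path.split("/").filter(Boolean)) {
    current = current.dirs.find(d => d.name === part);
    if (!current) return null; // no such directory
  }
  return current;
}

console.log(resolve(root, "/home/bob").files); // ["bob.jpg"]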

Protection can be implemented in many ways. Some file systems allow you to have password-protected directories; in this system the file system won't allow you to access a directory until it is given a username and password for it. Others extend this system by giving different users or groups access permissions. The operating system requires the user to log in before using the computer and then restricts their access to areas they don't have permission for. The system used by the computer science department for storage space and coursework submission on raptor is a good example of this. In a file system like NTFS, all types of storage space, network access and use of devices such as printers can be controlled in this way. Other types of access control can also be implemented outside of the file system; for example, applications such as WinZip allow you to password-protect files.

There are many different file systems currently available to us on many different platforms, and depending on the type of application and the size of the drive, different situations suit different file systems. If you were to design a file system for a tape backup system, then a sequential access method would be better suited than a direct access method, given the constraints of the hardware. Likewise, if you had a small hard drive on a home computer, there would be no real advantage in using a more complex file system with features such as protection, as it isn't likely to be needed. If I were to design a file system for a 10 gigabyte drive, I would use linked allocation over contiguous to make the most efficient use of the drive space and limit the time needed to maintain the drive. I would also design a direct access method rather than a sequential one to make the most of the strengths of the hardware. The directory structure would be tree-based to allow better organisation of information on the drive, and would allow for acyclic directories to make it easier for several users to work on the same project. It would also have a file protection system that allowed different access rights for different groups of users, and password protection on directories and individual files. Several file systems that already implement the features I've described above as ideal for a 10 gigabyte hard drive are currently available; these include NTFS for the Windows NT and XP operating systems, and ext2, which is used in Linux.
http://www.articlecity.com

video
http://www.youtube.com/watch?v=iHYiG-H-bKw

Get FREE Internet Access on Your Computer

Will mobile internet ever be truly possible? Mobile phone service providers give you access to the internet, or at least that is the claim. However, there are issues of coverage boundaries, availability of good signals, and handset memory, which affects download speeds; the most prohibitive aspect is the cost.

You can have access to the internet via your computer on the move using a data card, with a starting price of nearly £50 per month for limited access. Unlimited access via data card can cost as much as £150 per month. WEBAROO, soon to be on the market, is about to change all that. This device, in conjunction with a computer or other mobile tool, for the first time gives you the ability to have "Web on your computer on the move". This clever little tool does Google searches on the underground, on a plane, on top of a mountain or even in the wildest part of the Amazon jungle.

How does Webaroo add the power of the internet to your computer? Cutting-edge compression technology, which is the core of this device, saves the internet on the hard drive or memory chip of your computer. The whole internet is compressed into 2 gigabytes of memory, and it is thereafter yours to take with you wherever you want to take your computer.

What are the benefits of this technology? There are no wifi issues to be concerned about. There are no costs for the use of the internet on the move. You simply have free internet at your fingertips on your computer to use as you please. You can access useful information like maps, encyclopedias and even news stories.

Shortcomings of Webaroo? You don't have access to the entire web; your access is limited to the internet as seen by Google. The amount of information being held depends on the memory of your computer. Depending on the popularity of your search term, the search results are limited to the first few pages as opposed to the entire set of results for your chosen term. Updates are only possible when the machine is hooked up to the internet the next time.

Finally, Webaroo is not just another internet device or tool to fit neatly into a niche that has remained open for a long while. It is innovation itself. Webaroo, for the first time, has made it possible for people to have the internet on their computer and in their pocket, at its very first attempt. And all that technology, information and system for free, on your computer and on the move. Because of this achievement, Webaroo is a very serious contender on the market with a myriad of possibilities. Webaroo is very good news for the market because it will encourage other suppliers to come up with even more innovative products. Cheaper internet access via your computer on the move is really needed, and I hope that Webaroo makes big inroads into that area by its very presence.

Did you find this article useful? For more useful tips and hints, points to ponder and keep in mind, and techniques and insights pertaining to Google AdSense, please browse for more information at our website:

http://www.reprintarticlesite.com
http://computertips.reprintarticlesite.com

http://www.articlebasement.com
video
http://www.youtube.com/watch?v=3lNqqryPBNU

Increase Server Uptime with Automatic Defrag

In today’s computing environment, uptime is a critical factor. This is because information technology has slowly crept up from the supporting position it had years back—suddenly we look up and IT is not merely enhancing business operations, it in itself has become business operations. When a customer is placing an order, it is directly or nearly directly with a computer. When a shipment is received by a customer, the receiving signature is scanned and receipt is immediately confirmed through a computer. Messaging, quotes, accounting, inventory—they’re all done straight through automated systems that cannot afford downtime.

Now that so many servers are "front and center" and must run 24x7, time to perform tasks such as anti-virus scanning, backups and defrag has become incredibly scarce. So scarce, in fact, that some sites put off defrag until performance is absolutely intolerable and the only choice is to bring a system down and run a manual or scheduled defragmentation.

Disk fragmentation occurs when a file is broken up into pieces to fit on the disk. Because files are constantly being written, deleted and resized, fragmentation is a natural occurrence. When a file is spread out over several locations, it takes longer to read and write. But the effects of fragmentation are far more widespread: Slow performance, long boot-times, random crashes and freeze-ups — even a complete inability to boot up at all. Many users blame these problems on the operating systems, when disk fragmentation is often the real culprit.

In the context of administering computer systems, defragmentation is a process that reduces the amount of fragmentation in file systems. It does this by physically reorganizing the contents of the disk to store the pieces of each file close together and contiguously. It also attempts to create larger regions of free space using compaction to impede the return of fragmentation. Some defragmenters also try to keep smaller files within a single directory together, as they are often accessed in sequence.
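
A toy sketch of the reallocation idea (not how a production defragmenter works, which must move data safely in place while the system runs):

// Given files as lists of scattered segments, rewrite each file into
// one contiguous run and leave the free space in a single region.
function compact(files) {
  let nextFree = 0;
  const layout = {};
  for (const [name, segments] of Object.entries(files)) {
    const size = segments.reduce((sum, seg) => sum + seg.blocks, 0);
    layout[name] = { startBlock: nextFree, blocks: size }; // one run
    nextFree += size;
  }
  return layout;
}

const fragmented = { "a.log": [{ startBlock: 40, blocks: 2 }, { startBlock: 90, blocks: 3 }] };
console.log(compact(fragmented)); // { "a.log": { startBlock: 0, blocks: 5 } }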

Jim Bernal, Senior Network Engineer with Howe Barnes Hoefer & Arnett in Chicago, Illinois, found out just how bad it can get. “We constantly had servers running slowly and getting really fragmented from constant file access and over time file access would almost halt or take minutes to access a file,” Bernal said. “We also had problems with users logging in with domain controllers sometimes rejecting users because of timeouts in communicating with our DNS servers.”

Like many sites today, Bernal’s company was running the whole gamut of services and applications, including Microsoft Exchange, SQL, fileservers, domain controllers, print servers, Terminal Services, Live Communications servers, accounting software, SharePoint and Virtual Services servers for virtualization.

But also like many sites, Bernal's fortunately discovered automatic disk defragmentation. A high-quality disk defragmenter utilizes only otherwise-idle system resources, so defrag takes place whenever and wherever possible. There is never a negative performance hit from defragmentation, and no scheduling is ever required. Performance is always maximized, so systems can remain up and running.

Today, taking your systems down is the same as taking part of your business down. It is therefore better to take care of your servers and systems before a problem becomes incurable. So go ahead and download your copy of a free disk defragmentation tool.

http://www.articlefeeder.com

3 Ways of Repairing the Windows Blue Screen

The most frustrating computer error of all is the blue screen that pops up out of nowhere, almost always when you're in the middle of something very important. The screen appears, listing a cryptic message of numbers and letters about something going wrong. Average computer users have no idea what is happening; I hope this article can help.

Once you've read through this article and tried each of the steps, I hope you will be able to cure your Windows blue screen blues. You can do each of these tasks on your own, and I suggest you give each one a try before you send your computer to a repair shop.

NOTE: Before you attempt any of these steps, you should ensure you have a Windows restore point set and that all of your important files and documents are backed up externally somehow. This will ensure that if anything goes wrong you won't be at a total loss.

One: Scan your Windows Registry for Errors

Few computer users know what the Windows registry is. It's a collection of files and settings that keeps Windows running. Every piece of software that is or has been installed on your PC makes modifications to the registry, and small errors there can cause big problems.

If you don't have a registry cleaner, I suggest you try Regcure, a well-known registry repair solution. They offer a free scan tool, so you can scan your entire PC for registry errors before you have to spend a dime.

Two: Virus and Spyware Scanning

It's rare that a virus or piece of adware will invoke the blue screen error, but it's not impossible. It's worth taking the time to run a virus or adware scanner on your PC to look for these items and correct them if they exist.

If you don't have a piece of software to do either of these scans you can get a free virus scanner from Grisoft (AVG antivirus) and a free spyware/adware scanner from Lavasoft (Adaware).

Three: Un-install and Re-install Program(s)

When the blue screen error appears, does it happen with the same program(s) consistently? If so, it's possible that something is corrupt in the files of that particular program. The good news is that this is one of the easier items to repair, so long as you have the program(s) backed up.

Visit your Control Panel, then the Add/Remove Programs icon. There you'll find a listing of all the programs on your PC. You can select programs to remove from this list. Once you've removed a program, you can then do a fresh install of it.

NOTE: Before you rush off to remove any programs, ensure you do have the ability to re-install the program, and that you back up any of your personal files associated with that program as well.

What's Next if These Steps Don't Work?

There is, of course, a chance that none of this will improve the performance of your computer and you'll still be getting the blue screen error. If this is the case, then you have two options.


1.) You can take your PC to a repair shop and let them have a go at it. Be prepared to spend a few bucks, and ensure you have everything backed up, because there's a good chance they'll reformat everything.

2.) You can restore Windows and format your hard drive yourself rather than paying a tech shop to do it. Again, be sure you have everything backed up, because once you reformat and restore your Windows install you'll likely lose everything.


http://www.goarticles.com

video
http://www.youtube.com/watch?v=HFndVORlXGE

Discover the Easy Way to Copy Wii Games and to Make Backups Anytime You Want

The Nintendo Wii is an incredibly popular system that offers a unique method of gameplay. It's no wonder so many people want to make copies of their own Wii games as backups to keep them from being damaged.

Right off the bat we want to make one thing clear: we are NOT encouraging you to copy games that you do not own. The purpose is to help you make backups of your existing games. Making backups of your own discs is completely legal.

While these games come on a disc that looks a lot like your burnable CDs, they can't just be copied the same way you'd rip a music CD. It's a little more complicated than that. Here's a guide covering the basics of copying Wii games, as well as the circumstances when you should and shouldn't do it.

Your usual burning software can't copy your Wii games the same way you'd burn a CD. Roxio, Nero, and other popular burning programs just can't copy these discs. That's because there's encoded copy protection on the disc that keeps your computer from being able to make sense of it.

I remember trying this on my computer a few years ago. I figured if Roxio could burn a CD, it could burn a game, right?

My efforts just left me frustrated, as it obviously wasn't working for me.

Of course, like most copy protection, it didn't take long for someone to come up with a workaround. There are programs available that can read through the copy protection that comes on a Wii disc.

Once you've installed this kind of software, making a copy of the game is relatively easy: your computer is suddenly able to understand the data that was unreadable before. This is true for games on just about any system, including the PS3, Xbox 360, and the Wii.

By having the one program, you are able to make copies of games for all your systems if you own more than just the Wii. You don't need a separate program for each one.

Even some PC games use encryption that can be broken by these programs. Should you own more than one game system, this software can help you produce backup copies for all of them, not just your Wii.

After you've found and installed a program to help you break the encryption, you'll need to copy the data on your game as a disk image. Once you have this disk image on your hard drive, all you need to do is put a blank DVD or CD into the drive, then instruct the program to burn the image to it.

The copying programs that are now available make it possible to copy Wii games without having to do anything but click a few buttons. Copying Wii games is a great way to make a backup copy of any game you want. It never hurts to have a backup in the case that the original gets lost or damaged. Enjoy!
Author Resource: Go to http://www.burnyourgames.com to find out how you can get a free trial of the most popular game-copying software.
Article From Article Press

http://www.articlepress.org

video
http://www.youtube.com/watch?v=2OiXV0h8Jec

Data Loss and Data Recovery: An Overview

Data stored on computers can be lost in several ways. To recover from such a situation, the main option is to restore from backed-up data; alternatively, you can call on the services of a specialist data recovery company. Reconstructing all the lost data manually is usually not a viable option.

Reconstruction will involve:
  • Locating the original documents or specifications,

  • Huge amounts of data entry, and

  • Serious disruption to the business during the reconstruction period



The cost of such an exercise can seem very high, but without it being done a company's viability can be put at risk.

This is the context in which data recovery becomes a significant issue. Before we look at data recovery, let us take a brief look at how data gets lost.

How Does Data Get Lost?

Data stored on computer media can get lost in different ways, including:

  • User actions such as deleting data unintentionally, carelessly, or even deliberately for malicious or financial gain

  • Loss of removable media such as CDs/DVDs on which valuable data were stored (see UK Revenue and Customs, DVLA, banks and building societies, etc.)

  • Administration and other human errors leading to loss of valuable data (e.g. leaving primary data, not backed up, on a laptop that is at risk of theft)

  • Power supply problems leading to system malfunctions that cause data damage

  • System crashes leading to incomplete sessions and loss of data

  • Hardware problems causing incorrect or incomplete write operations

  • Corrupted media or databases making recorded data unreadable

  • Natural disasters like fire, flood or earthquake destroying the computer facilities

  • Data loss caused by malicious external agents like viruses and hackers

  • Pilferage of physical media or equipment such as laptop computers

Studies indicate that almost three quarters of data loss is caused by hardware failure and human errors. It is in these cases that data recovery becomes a viable option.

Data recovery is not possible where the media storing the data has been lost and there are no backups available.

Preventing Data Loss

USER EDUCATION: Training your staff in various aspects of computer operations can minimize data losses, particularly losses caused through human error, which is a major cause of data loss. Staff can be trained to take the necessary precautions, such as backing up, antivirus checking and working through firewalls, but there should always be someone in charge of the security of data.

BACKUP: The impact of data loss can be minimized through well-implemented data backup procedures. The qualification "well-implemented" is important. Often, data backup is done in a routine manner without checking whether a full data recovery is possible in case the original data is lost.

Data backup might not be up to date. Data in backup media can prove irrecoverable owing to improper backing up or media corruption. It is important to make regular reviews and tests of both backup procedures and backed-up data to ensure that the data is indeed recoverable if it becomes necessary.
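
As a small illustration of that "test your backups" advice, a script can compare checksums of the original and the backup copy. This sketch uses Node.js (a choice made for the example, not something the article prescribes) and hypothetical file paths:

const fs = require("fs");
const crypto = require("crypto");

// Hash a file's contents so original and backup can be compared.
function sha256(path) {
  return crypto.createHash("sha256").update(fs.readFileSync(path)).digest("hex");
}

function verifyBackup(original, backup) {
  return sha256(original) === sha256(backup);
}

// Hypothetical paths: check the copy before you ever need to restore it.
console.log(verifyBackup("data/accounts.db", "backup/accounts.db")
  ? "backup verified" : "backup does NOT match the original");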

Storing backed-up data in a location away from the primary data location can help in cases such as a fire, flood or other natural disaster.

ANTI-VIRUS & FIREWALL SOFTWARE: Anti-virus software, if kept up to date, can help guard against virus attacks. Firewalls can stop most hackers from breaking into your systems.

REDUNDANT MEDIA AND WRITE OPERATIONS: Data is written to more than one media device or location, so that if one of them fails, the other can be used to repair the damage.


Data Recovery

Where a proper backup system is in place, lost data can be recovered to a greater or lesser extent using the backups. The speed of such recovery will depend on the kind of backup media used. Where the backup media is online, such as in RAID systems or web-based third-party facilities, data recovery can be quick.

At the other extreme, where data is stored on magnetic tapes at locations away from the primary site, data recovery can be a time-consuming exercise, as the tapes may need to be shipped or couriered to the main location.

Where backups are not available or prove unsatisfactory, you have to depend on data recovery software or data recovery specialists. Data recovery software might be able to recover deleted files and repair corrupted directories. However, more serious data losses are likely to need the services of data recovery companies with expertise and specialized facilities, such as Class 100 (ISO 5) clean rooms.

Data recovery companies extract a raw image from the disk and might try different techniques such as file system recovery, replacing damaged hardware components with compatible healthy ones, and reprogramming the firmware. These are technically involved and often delicate procedures requiring careful control of dust and electrostatic discharge (ESD) in order to recover as much of the data as possible.

Conclusion

Data loss can occur owing to numerous factors and is something that has to be planned against. Training your computer personnel, proper backup procedures and the use of anti-virus and firewall software are some of the measures you can take to guard against data loss.

If these measures are absent or prove inadequate, you may have to approach external data recovery specialists with the expertise and facilities to recover lost data.
http://www.softensive.com

video

http://www.youtube.com/watch?v=1_sNdPoQdcM

Partial Page Rendering Using Hidden IFrame

Executive Summary:

Partial-page rendering removes the need for the whole web page to be refreshed as the result of a postback. Instead, only individual regions of the page that have changed are updated. As a result, users do not see the whole page reload with every postback, which makes user interaction with the Web page more seamless.
Developers who want to add such behaviors to their web pages are often faced with a difficult decision. All of these actions can be implemented using a very simple solution: refreshing the entire page in response to the user interaction. This solution is easy but not always desirable. The full page refresh can be slow, giving the user the impression that the application is unresponsive. Another option is to implement such actions using JavaScript (or other client-side scripting technologies). This results in faster response times, at the expense of more complex, less portable code. JavaScript may be a good choice for simple actions, such as updating an image. However, for more complicated actions, such as scrolling through data in a table, writing custom JavaScript code can be a very challenging undertaking.
This paper provides a solution which avoids the drawbacks of both the full page refresh and custom JavaScript solutions. Partial page rendering provides the ability to re-render a limited portion of a page. As in the full page refresh solution, partial page rendering sends a request back to the application on the middle tier to fetch the new contents. However, when partial page rendering is used to update the page, only the modified contents are sent back to the browser. This paper presents a solution using a hidden IFrame and simple JavaScript to merge the new contents back into the web page. The end result is that the page is updated without complex custom JavaScript code, and without the loss of context that typically occurs with a full page refresh.

Introduction:

Web pages typically support a variety of actions, such as entering and submitting form data and navigating to different pages. Many web pages also support another type of action: allowing the user to make modifications to the contents of the web page itself without actually navigating to a different page. Some examples of such actions include:
Clicking on a link could update an image on the same page. For example, an automobile configuration application might update an image of a car as the user chooses different options, such as the preferred color.
Selecting an item from a choice box might result in modifications to other fields on the same page. For example, selecting a car make might update the set of available car models that are displayed.
Clicking a link or selecting an item from a choice could be used to scroll to a new page of data in a table. Clicking a button in a table might add a new row to the table.
All of these actions are similar in that they result in the same page being re-rendered in a slightly different state. Ideally, these changes should be implemented as seamlessly as possible, so that the user does not experience a loss of context which could distract from the task at hand.
Partial page rendering can be implemented with a very simple solution using a hidden IFrame and minimal JavaScript. Any part of the page can be partially rendered using div or table tags in HTML.

Page Elements That May Change During PPR:

•Re-Render Data: The same fields are redrawn but their data is updated. Examples include the Refresh Data action button, or recalculate totals in a table.
•Re-render Dependent Fields: Fields may be added, removed, or change sequence, and data may be updated. Examples include the Country choice list, which may display different address fields, and toggling between Simple and Advanced Search.
•Hide/Show Content: Both fields and data toggle in and out of view.

Page Elements That Do Not Change During PPR:

Some page elements are always associated with a page, regardless of the content displayed on the page.

As a general rule of thumb, elements above the page title (except message boxes) remain constant and do not change position, whereas footer elements remain constant but may move up or down the page to accommodate changes to page content. The following elements never change when PPR is initiated:
• Branding
• Global buttons
• Tabs, Horizontal Navigation, SubTabs
• Locator elements: Breadcrumbs, Train, Next/Back Locator
• Quick links
• Page titles (first level header)
• Page footer
• Separator lines between the Tabs and Page Title
In most cases the following elements will also not change, other than moving up or down the page to accommodate changed elements. Nevertheless, in certain cases actions on the page may require them to be redrawn:
• Side Navigation, unless it contains a Hide/Show control.
• Subtabs
• Contextual information
• Page-level action/navigation buttons
• Page-level Instruction text
• Page-level Page stamps
• Page-level Key Notation
In all the above scenarios this solution can be used to achieve good performance and user interaction on web pages.

Contexts in Which PPR Should Not Be Used:

When PPR is implemented correctly, it significantly improves application performance. When a performance improvement is not possible with PPR, it should not be implemented, thus avoiding unnecessary code bloat. PPR can't be used when navigating to another page (with a different title).

Partial Page Rendering Solution:

The solution to partial page rendering uses a simple hidden iframe and JavaScript, and can serve as a generalized solution for all partial page rendering scenarios.
Below is the main HTML (Table 1.1), which has two buttons: one to show a simple table generated by the server, and another to remove the table.

<html>
<head>
<title> Main Document </title>
<script language="JavaScript">
<!--
function showTable() {
// Setting the iframe's src navigates it to the server page. (The
// original used hiframe.location, which relies on an IE-only
// id-to-window shortcut; setting src works across browsers.)
document.getElementById("hiframe").src = "./table.htm";
}
function removeTable() {
document.getElementById("tableId").innerHTML = "";
}
//-->
</script>
</head>
<body>
<iframe id="hiframe" name="hiframe" style="visibility:hidden;display:none"></iframe>
<table>
<tr>
<td>Table::</td>
<td></td>
</tr>
<tr>
<td colspan="2"><div id="tableId"></div></td>
</tr>
<tr>
<td><input type="button" value="Show Table" onclick="showTable()"></td>
<td><input type="button" value="Remove Table" onclick="removeTable()"></td>
</tr>
</table>
</body>
</html>

Table 1.1

[iframe id="hiframe" style="visibility:hidden;display:none"][/iframe]
This iframe tag is used as target to the Partial Page Rendering Request.
The tag [input type="button" value="Show Table" onclick="showTable()"] gives the user action to get the contents of a table from the server, in this solution sample html is provided to render the table, which supposed to be generated by the server.
The tag [input type="button" value="Remove Table" onclick="removeTable()"] gives the user to remove the table from the user interface.
The JavaScript
function showTable() {
document.getElementById("hiframe").src = "./table.htm";
}
is used to get the contents from the server: setting the iframe's src sends a GET request to the server, and the iframe receives the HTML as the response.
If the requirement insists on sending a POST request for the partial page rendering response, that can be achieved by setting the HTML form element's target attribute to the name of the hidden iframe.

The code for the post request looks like
<form method="post" action="/myaction" target="hiframe">

Partial Page Rendering Server Response:

Table 1.2 shows the sample response from the server for partial page rendering. This response includes the JavaScript code to transfer the HTML from the hidden iframe to the main page.


<html>
<head>
<script language="JavaScript">
<!--
function iframeLoad() {
parent.document.getElementById("tableId").innerHTML = document.getElementById("tableId").innerHTML;
}
//-->
</script>
</head>
<body onload="iframeLoad()">
<div id="tableId">
<table>
<tr>
<td>1</td>
<td>One</td>
</tr>
<tr>
<td>2</td>
<td>Two</td>
</tr>
</table>
</div>
</body>
</html>

Table 1.2
The tag <div id="tableId"> encloses the content to transfer from the hidden iframe to the main page.
<table>
<tr>
<td>1</td>
<td>One</td>
</tr>
<tr>
<td>2</td>
<td>Two</td>
</tr>
</table>
This is the content that shows the table to the user.

The code <body onload="iframeLoad()"> is used to trigger the action that transfers the content.

function iframeLoad() {
parent.document.getElementById("tableId").innerHTML = document.getElementById("tableId").innerHTML;
}

This JavaScript function transfers the data from the hidden iframe to the main page. The expression parent.document.getElementById("tableId").innerHTML refers to the div's id in the main page, while document.getElementById("tableId").innerHTML refers to the HTML of the partial page response.


Conclusion:

Partial page rendering using a hidden iframe is a very simple solution that improves the user experience: web pages become richer, more responsive to user actions, and behave more like traditional client applications, while full-page refreshes and page flicker are reduced.

References:
1. http://ajax.asp.net/docs/overview/PartialPageRenderingOverview.aspx
2. http://www.w3schools.com/htmldom/dom_obj_document.asp
3. http://www.w3schools.com/tags/tag_iframe.asp
4. http://www.oracle.com/technology/tech/blaf/specs/ppr.html
5. http://download-west.oracle.com/otn_hosted_doc/jdeveloper/904preview/uixhelp/uixdevguide/partialpage.html

source
http://www.articleorange.com
video
http://www.youtube.com/watch?v=SDE782cFm7M

What Do You Know About Blog and Ping Software Review?

WordPress and RSS to Blog are among the most widely used pieces of software by people all over the world. They automatically send information about your site and links to all the blog sites at the click of a button, and they can save your list of links for future use as well. Very good software to use for blog and ping is BlogSolution, one of the most advanced blog, ping and SEO packages around. It brings in results quickly and it is very effective (or visit www.software-index-website.com). You can make your site easily searchable by search engines, as the software posts your links directly on thousands of blogs in one go. The whole process can be achieved through the click of a button.

BlogSolution can create at least a hundred blogs a second using its own highly advanced blogging platform. BlogSolution is a multi-manager, as all your BS2 domains can be managed from a single place. It easily interlinks your sites and also posts one-way links to your sites. There is a very good feature by the name of Smart Jobs that automatically creates blogs in your absence.

BlogSolution indexes entries very quickly using the Indexing Turbocharger, which leads to extremely fast indexing. BlogSolution can get more spiders than any other blog software at present. The interface of BlogSolution is extremely simple to use and very user friendly. You can learn the full use of BlogSolution within a short period of time with its help tool. There are also video tutorials provided with the package. Using BlogSolution you can post hundreds of one-way links on various sites.

If you use BlogSolution then you don't need any other blogging solution. BlogSolution takes care of all your blogging needs and brings in positive results for your site very quickly (you can visit www.scripts-to-sell.com). If you would like to test BlogSolution before buying it, you can download demo software from the BlogSolution website. Once you are satisfied with its performance you can buy the software online, paying by credit card.

Hence we see that BlogSolution is the best blog solution yet found. The users of BlogSolution have found it to be extremely effective and would not like to use any other blog software. It is next-generation software that has taken the world by storm with its simple user interface, bringing in positive results for all its customers. Watch your income grow as more and more people find your website on the search engines and order products and services.

Article Directory: http://www.everyonesarticles.com

www.viral-toolbar-builder.com www.software-index-website.com

video

http://www.youtube.com/watch?v=BIwqMXPjnLU

How to Make an Animation Movie with iKITMovie

When animation is used for films or movies, each frame is produced on an individual basis. Frames can be produced using computers or photographs of images that are either drawn or painted. Frames can also be generated by altering a model unit in small ways and using a special camera to take pictures of the results.

Though the work of producing animated movies and cartoons can be intense and laborious, computer animation can make the process much faster. Computer technology is steadily improving, and professionals are able to create life-like characters using computers and special animation software. However, skilled animators are still necessary for producing quality animations. After all, computers are not yet capable of making artistic choices and bringing real passion to simple images.

To create a stop motion animation, you need a webcam or a basic digital camera (preferably mounted on a tripod for stability) and any video editing software (like Windows Movie Maker, Adobe Premiere Pro, Apple iMovie, etc.).

Instructions

Step 1: Open your iKITMovie program (making sure your webcam is connected and working).

Step 2: Choose your preferred resolution for capturing your images.
It's best to use 640x480 pixels if your camera can handle it!

Step 3: Lighting is essential, so make sure your desk lamp (or, even better, desk lamps) are on and directed downward onto the object that you want to animate.

Step 4: Make sure you turn off the auto white balance feature on your camera, and ensure that you focus either manually or digitally (depending on the camera you have). Now start taking snaps, moving the object bit by bit for each frame, and enjoy.

Bonus Tip: If you would like your characters to jump in the stop-motion animation video, attach them to a thin wire and lift the wire a few centimeters in each frame. If the wire is the same color as the background, it won't be visible in the final movie.
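
One bit of planning arithmetic that helps before you start shooting: frames needed = seconds of footage × frames per second. A quick sketch (12 fps is a common stop-motion choice, not an iKITMovie requirement):

// How many snaps to capture for a clip of a given length.
const fps = 12, seconds = 30;
console.log(`capture ${fps * seconds} snaps for a ${seconds}s clip`); // 360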

http://www.articlezap.com

video
http://www.youtube.com/watch?v=31xvh-itXiM

Can free security save the web?

Protecting Windows from malware has always been a sensitive subject for Microsoft, given that many people blame the software giant for causing all the problems in the first place.

The operating system is notoriously susceptible to attack and Microsoft has known for several years that it must do more to protect users. Malware is so pervasive that one could argue Windows isn't fit for purpose out of the box - any PC connecting to the web without security software is living on borrowed time.

Therefore, Microsoft has been treading a fine line since launching its OneCare security product two years ago. Critics claimed the firm was charging Windows users for a second product just to make sure the first operates safely. It's a bit like buying a car, only to find out the brakes are an added extra.

This is slightly unfair on Microsoft, whose software is a target for hackers and malware writers largely because it's so widely used. And over the past 10 years we've come to terms with having to fork out for third-party security software.

However, if you believe some of the headlines over the past few weeks, that may be about to change. Microsoft announced that it will kill off OneCare next summer, and replace it with a free antivirus product codenamed 'Morro'.

At the heart of this strategy is a drive to increase the number of computers with antivirus protection installed. Microsoft cites some pretty worrying statistics to explain the problem: as many as 50 percent of computers aren't properly protected. This seems an incredible figure to those of us who have been studiously installing and updating antivirus for years.

Microsoft contends that many consumers are confused by the bloatware that's preinstalled on brand-new PCs - they think a trial version of Norton ensures they've got security sorted, blissfully unaware that it can become a hindrance once the 90-day trial is up. So, despite running on OneCare's less-than-convincing antimalware engine, Morro will be better than nothing.

But Morro won't include the bells and whistles provided by specialists in the field, such as Symantec, McAfee and Kaspersky; the latest suites offer a combination of malware protection, PC optimisation, antispam and backup features. Microsoft is unlikely to provide these for free because of antitrust concerns.

However, if Morro convinces those who take a slack attitude to security to finally get some antivirus protection, their systems will present less of a threat to the internet at large. Unprotected PCs are an easy target and many of them are recruited into the botnets responsible for distributing malicious code in the first place.

So while the prospect of a Microsoft product that's secure out of the box remains a distant one, Morro is a step in the right direction that could benefit us all.

Do-it-yourself security

You can, of course, already get security software for nothing. Some of the products that made it into our list of the 50 best Windows programs provide decent protection from online threats, and many of them are free.

The only problem with a do-it-yourself security setup is that it takes a bit of managing and updating. You won't be able to leave these products running in the background unattended. But for those who like to get their hands dirty, there are some real gems. Pick up a copy of our February issue to see for yourself.

http://www.pcadvisor.co.uk/blogs/index.cfm?entryid=107724&blogid=4

Eliminate the backup copies created during the installation of patches and service packs

It can often be useful to recover precious disk space. Every time updates and service packs for Windows are installed, the operating system usually stores a copy of all the files that are modified or removed inside the \WINDOWS or \WINNT folder.

This is done so that, if necessary, one or more patches (or an entire service pack) can be uninstalled at a later time. If, after a period of testing, no problems are found once the installation of one or more patches is complete, it is possible to delete all the backup files and recover the corresponding space on the hard disk.

Now go to the \WINDOWS or \WINNT folder and turn on the display of hidden files and folders (Tools menu, Folder Options, View, Show hidden files and folders). You should see, in the \WINDOWS or \WINNT folder, a more or less long list of folders whose names begin with $NtUninstall or $NtServicePackUninstall.

If, after installing the various patches, you have found no problems with the operation of your personal computer, you can proceed to delete all the $NtUninstall and $NtServicePackUninstall folders.
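
If you would rather review the folders before deleting anything by hand, a small script can list them. This sketch uses Node.js (an assumption; any scripting language would do) and should be run with administrator rights:

const fs = require("fs");
const path = require("path");

// Find the Windows folder and list the patch-backup folders in it.
const windir = process.env.WINDIR || "C:\\WINDOWS";
const entries = fs.readdirSync(windir).filter(name =>
  name.startsWith("$NtUninstall") || name.startsWith("$NtServicePackUninstall"));

for (const name of entries) {
  console.log(path.join(windir, name)); // review, then remove to reclaim space
}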

http://www.softwaretipspalace.com/MS_Windows_XP/Tips-and-Tricks/eliminate_the_copies_of_backup_created_during_installation_of_service_pack.html

Java Script Framework For Building Google Maps In Minutes

The Google Maps API has been available for free, public-facing sites since February 2005. A remarkable variety of sites (over 30,000 in number) have already integrated Google’s mapping technology using this API. New sites are being built every day.

I have been working with Google Maps on and off from the very beginning. Over time it became clear to me that, from the integration perspective, Google Maps applications should be cost-effective to build, quick to deploy and easy to maintain, for both simple websites and complex web applications. While pursuing these goals, I became aware of several important aspects of dealing with Google Maps:

  • There are many good Google Maps tutorials out there; my favorite tutorial is one by Mike Williams

  • Geocoding is great, but some manual work will always be needed; we had to correct spelling errors in addresses and massage addresses a bit before they get properly coded

  • Adding markers to Google Maps is slow (specifically on IE) if you want to place over 100 markers on the map at the same time; this is easily resolved by using the elabel.js JavaScript library, which is much more lightweight and can handle many more markers; even when using elabel.js, avoid adding a GEvent to each marker at all costs

  • When using AJAX to bring the map data from the server, don't use XML as the data format – use JSON objects as described previously; the JSON objects are a hundred times faster and less CPU intensive

  • Make sure that you know JavaScript very well; all over the place the Google Maps API uses closures and lambda functions; structured error/exception handling is not there – if you do something wrong, all you will ever get is "Object blah is null"; all of Google's own code is obfuscated, so you can't understand anything in there; you also have to know how prototype-based inheritance works in JavaScript

  • Obfuscate your JavaScript; obfuscation is a snap using a modified version of the Rhino JavaScript engine called Dojo's Compressor, which is part of the Dojo Toolkit; it handles most JavaScript correctly except for some forms of eval(); the code below will not work after it is obfuscated:

function foo(msg){
alert(msg);
}

// "run" replaces the original article's "do", which is a reserved word
// in JavaScript and cannot actually be used as a function name.
function run(){
var msg = "Here we go!";
// The obfuscator renames the local variable msg, but not the name
// inside this string, so the eval'd call breaks after obfuscation.
eval("fo" + "o(msg);");
}

run();

While learning in depth about Google Maps, I created several rather complex working map applications. You can view them here:

The examples show the basics of working with Google Maps, but this is not the most interesting thing about them. These examples are built using a custom JavaScript application framework. In this framework, metadata is used to define the filters on the right-hand side and a table at the bottom of the page. The main metadata file is called manifest.js and it defines all the information about the map. Here is an example of the JavaScript metadata required to configure the Timothy's Coffee page:

//
// These are the generic framework objects
//


// zone represents one page with map, filters, and table
function oygZone(width, name, desc, url, columns,
rowBuilder, popupBuilder, iconChooser, filtersCaption, filters){ ...}

// column is a column in the data table
function oygColumn(propName, caption, isNum, isSelector, align){ ... }

// all kinds of filters are here
function oygDropDownActionList(caption, lambdas, captions, selIndex){ ... }
function oygOneOfOrAllDropDownFilter(caption, lambdas, captions, selIndex){ ... }
function oygCheckboxFilter(caption, propLambda, checked){ ... }
function oygSubStringFilter(caption, propLambda){ ... }

// spacer without text
function oygStaticSpacer(){ ...}

// old static text
function oygStaticText(caption, align, clazz){ ... }

//
// These are the resources used by the Timothy’s Coffee
//

var columns = [
new oygColumn("wf", "Wireless Internet", false, false, "center"),
new oygColumn("sadd", "Address", false, true, "left")
];


function iconChooser(item){
return "";
}


function rowBuilder(item){
// NOTE: the value of check was lost when the article was published;
// it was presumably a small HTML checkmark image or entity.
var check = "";

// Strip the ", Canada" suffix from the address for the table.
var addNoCou = item.add;
var idx = addNoCou.indexOf(", Canada");
if (idx != -1){
addNoCou = addNoCou.substr(0, idx);
}

return [
item.wf ? check : "",
addNoCou
];
}

function popupBuilder(item){
var bull = "• ";

// NOTE: the HTML tags inside this string were stripped when the
// article was published; the <br/> separators and <div> wrapper below
// are a plausible reconstruction, not the author's exact markup.
var buf =
"<div>" +
"Timothy's World Coffee<br/>" +
item.add + "<br/>" +
"Phone: " + item.pho + "<br/><br/>" +
(item.wf ? bull + "Wireless Internet / Hot Spot" : "") +
"</div>";

return buf;
}

var filters = [
new oygStaticSpacer(),
new oygSubStringFilter("Address Contains", function(row){ return row.add; }),
new oygStaticText("for example: 'M2M' or 'Yonge'", "center"),

new oygStaticSpacer(),
new oygCheckboxFilter(
"Wireless Internet", function(row, checked){ return !checked || row.wf; },
false
),

new oygStaticSpacer()
];

var oygZones = [
new oygZone(
748, "Timothy's World Coffee", "Timothy's World Coffee (Fall 2006)", "json/20060826/web.js",
columns, rowBuilder, popupBuilder, iconChooser,
"Find Timothy's Near You", filters
)
];

The raw map data is loaded from JSON objects. The JSON objects can have any number of attributes; these attributes are used in the implementation of the filters and of the rowBuilder() and popupBuilder() callback functions. Here is a part of the complete data file for the Timothy's Coffee page:

"oygMarkers": [
{
"lat":50.994854, "lon":-114.071649,
"add":"6455 Macleod Trail SW, Calgary, AB, T2H 0K3, Canada",
"sadd":"6455 Macleod Trail SW, Calgary",
"pho":"(403) 259-2274", "wf":false
},{
"lat":51.064404, "lon":-114.096480,
"add":"1632 14th Avenue NW, Calgary, AB, T2N 1M7, Canada",
"sadd":"1632 14th Avenue NW, Calgary", "pho":"(403) 210-1266", "wf":true
}, {
"lat":51.046615, "lon":-114.067489,
"add":"225 7 Avenue SW, Calgary, AB, T2P 2W3, Canada",
"sadd":"225 7 Avenue SW, Calgary", "pho":"(403) 266-5457", "wf":true
},
...
...
...
]
}
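To make the data flow concrete, here is a minimal sketch of how a framework like this could apply filter lambdas to the loaded markers; applyFilters and loadedMarkers are hypothetical names for illustration, not part of the actual framework:

// Run every active filter's lambda over each marker row.
// A row stays visible only if all filters accept it.
function applyFilters(rows, lambdas){
    var visible = [];
    for (var i = 0; i < rows.length; i++){
        var keep = true;
        for (var j = 0; j < lambdas.length && keep; j++){
            keep = lambdas[j](rows[i]);
        }
        if (keep) visible.push(rows[i]);
    }
    return visible;
}

// e.g., keep only markers that offer wireless internet:
var wifiOnly = applyFilters(loadedMarkers, [ function(row){ return row.wf; } ]);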

Now that I have the framework for building maps, it takes me minutes to create a new working Google Maps application. No manual work – the map, the filters, and the table are all built on the fly, as defined by the metadata. I am able to reuse most of the framework's code without changing it, so less skill is required from a JavaScript developer to build a new map using the framework. The developer doesn't even need to know how Google Maps actually works...

http://www.softwaresecretweapons.com/jspwiki/javascriptframeworkforbuildinggooglemapsinminutes

Norton software conflicts with Windows XP SP3

Antivirus software from Symantec Corp. may cause the installation of Service Pack 3 for XP to corrupt the Windows Registry by adding unnecessary keys.

Symantec advises users to disable the SymProtect security feature of its products before applying XP SP3.

A Registry fix is needed by the latest XP patch

The latest in the continuing series of problems related to Windows XP Service Pack 3 involves Symantec's Norton AntiVirus. The company recommends that users disable the program's self-protection feature before installing XP SP3.

In a post to Norton's support forum, Symantec senior SQA manager Reese Anschultz suggests that customers disable the SymProtect feature found in various Symantec security products:

To do this in Norton Internet Security 2008 and Norton AntiVirus 2008, uncheck Turn on protection for Norton products on the Options pages of these programs prior to installing XP SP3. Once the service pack is in place, return to the Options page and re-enable this setting.

In Norton SystemWorks 2008, open the Advanced Options under Settings, click Next, choose Norton SystemWorks Options, and select the General tab. Uncheck Turn on protection for my Symantec product.

For other Norton products, read Anschultz's post, which notes that other third-party security products may also cause problems unless some functions are disabled before SP3 is installed.

Some Windows Secrets readers have had to remove Norton AntiVirus completely before deploying XP SP3. While taking this step may sound extreme, reader Bert Smith from Australia was told by a Symantec engineer that he should "follow these instructions" to use the Norton Removal Tool to uninstall Norton Internet Security 2008 before he deployed Vista SP1!

Given what we now know, it may actually be wise for you to uninstall Norton antivirus products prior to deploying SP3, which is XP's latest — and last — service pack. Thanks to reader Jan Levine for identifying this issue in the Microsoft TechNet Forums.

If you find that installing XP SP3 has corrupted your Symantec security product, my fellow MVP Bill Castner has devised a downloadable Registry fix (scroll down the page until you see My "Fix"). Castner first identified this issue along with Jesper Johansson, who's been tracking the XP SP3 problem in his blog.

Johansson also provides a patch for AMD computers that XP SP3 causes to reboot constantly. I described this problem in my May 22 column. For more information, see the following item.

A cure for XP SP3's never-ending reboots

If you're one of the folks whose AMD-based PCs constantly reboot after applying XP SP3, here's how you can recover.

When the system is first booting, press F8 to enter Windows' Safe Mode. Log into the Administrator account, click Start, Run, type cmd, and press Enter. When the command window opens, type the following command (don't forget the space after the equals sign, which is required):

sc config intelppm start= disabled

The problem is caused by the presence of Intel drivers on AMD-based systems. Follow the above steps only if you know your PC uses an AMD processor; doing so on an Intel-based machine could render the system unusable.

To determine which processor your system uses, open the Control Panel's System applet and click the General tab. (See Figure 1.)
Figure 1. Open the System applet in Control Panel to verify the type of processor your PC uses.
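If you'd rather check from a script, a one-off WSH/JScript query against WMI works too. This is just a convenience sketch; it assumes the standard Win32_Processor WMI class available on XP:

// check_cpu.js — run with: cscript //nologo check_cpu.js
var wmi = GetObject("winmgmts:\\\\.\\root\\cimv2");
var cpus = new Enumerator(wmi.ExecQuery("SELECT Manufacturer, Name FROM Win32_Processor"));
for (; !cpus.atEnd(); cpus.moveNext()){
    var cpu = cpus.item();
    // "GenuineIntel" means do NOT run the intelppm command above
    WScript.Echo(cpu.Manufacturer + " - " + cpu.Name);
}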

If the processor listed in the window is "Intel," do not enter the command shown above. An Intel-based system that constantly reboots may be having an unrelated problem. It might be caused by conflicts with antivirus products, as described in my previous item, or something else entirely that no one has yet identified.

As I wrote in my May 22 column, there's no rush to install XP SP3. Wait until we know more about these kinds of conflicts.

http://windowssecrets.com/2008/05/29/02-Norton-software-conflicts-with-Windows-XP-SP3

video
http://www.youtube.com/watch?v=p0pBQsAlgVU

How to submit blogger sitemap successfully and fix the maximum limit issue?

A sitemap is a way to tell search engines about pages on your site that they might not otherwise discover. Google provides a way to submit sitemaps through Google Webmaster Tools. So how do you create and submit a sitemap? For self-hosted sites, submitting sitemaps is easy, but for blogspot (Blogger) hosted sites there is a tweak involved. If you don't follow this tweak, you will end up with a WARNING and/or the maximum number of URLs submitted will be shown as 26.

So, how can you submit sitemaps for a blogger hosted blog without any warnings or errors?

Suppose you have a site called abc.blogspot.com. The sitemap for this site would be http://abc.blogspot.com/atom.xml?redirect=false&start-index=1&max-results=100; in other words, simply append atom.xml?redirect=false&start-index=1&max-results=100 to your blog URL in Google Webmaster Tools > Sitemaps > Add Sitemap.

Food for the thinking mind: if you don't include redirect=false, Webmaster Tools will throw a warning. If you don't include start-index=1&max-results=100, the maximum number of URLs submitted will always be shown as 26. Remember that max-results is a page size, not an end index: if you have more than 100 pages on your blogspot site, you need to include two sitemaps, one covering posts 1 through 100 (start-index=1&max-results=100) and another covering posts 101 through 200 (start-index=101&max-results=100), as the sketch below illustrates.
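To avoid off-by-one mistakes, here is a tiny helper that generates the sitemap URLs for any number of posts; bloggerSitemaps is a made-up name for illustration:

// Build the list of Blogger sitemap URLs, 100 posts per sitemap.
function bloggerSitemaps(blogUrl, totalPosts){
    var urls = [];
    for (var start = 1; start <= totalPosts; start += 100){
        urls.push(blogUrl + "/atom.xml?redirect=false" +
                  "&start-index=" + start + "&max-results=100");
    }
    return urls;
}

// A blog with 250 posts needs three sitemaps (start-index 1, 101, and 201):
var urls = bloggerSitemaps("http://abc.blogspot.com", 250);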



http://reviewofweb.com/blogging/how-to-submit-blogger-sitemap-and-fix-maximum-limit-issue/

video
http://www.youtube.com/watch?v=BsUMd_Sb3n0

Firefox Configuration Tips and Tricks

Use the Firefox browser :

If you type about:config in the address bar of Firefox, it will open its configuration page, which allows you to change a lot of settings.

Here I'm presenting some settings that can be configured. These tweaks have been tested on high-speed networks and cable; they might need to be adjusted for slower connections.

You can follow either of these two approaches to configure your browser:

  1. Use the about:config screen which lets you add, modify or reset values or
  2. Manually add all of these hacks to the prefs.js.

Option 1:

Type about:config in the location/URL bar; this will list all current preferences, and you can change the settings listed below.

Option 2:

  • Exit the browser completely; if you don't close the browser first, it will overwrite your changes with the current settings when it exits.
  • Find your prefs.js file (usually in Drive:\Documents and Settings\USER_NAME\Application Data\Mozilla\Firefox\Profiles\... in Windows XP).
  • Back up your prefs.js file.
  • Copy and paste the hacks listed below at the BOTTOM of the file.
  • Save the prefs.js file and restart your browser.
<---- Begin copy selection Below this line ---->

// Performance
// This will allow Firefox to keep its GUI memory when minimized, so the browser window restores faster
user_pref("config.trim_on_minimize", false);

// Specify the amount of memory cache:
// -1 = determine dynamically (default), 0 = none, n = memory capacity in kilobytes
// If you have the memory to spare, enabling this will run things a little smoother
user_pref("browser.cache.memory.capacity", 65536); //<-- thus equal about 64 megs, drop down to less if you can't spare the RAM

// Remove painting delay when loading pages
user_pref("nglayout.initialpaint.delay", 0); // Default is 250

user_pref("content.notify.ontimer", true); // Turn on timer-based reflow management

user_pref("content.notify.interval", 100); // Sets the allowed time between reflows in microseconds

// Set the number of reflows to do before waiting for the rest of the page to arrive
user_pref("content.notify.backoffcount", 200);

// Other Tweaks
user_pref("content.max.tokenizing.time", 3000000);
user_pref("content.maxtextrun", 8191);

// Enable pipelining to improve performance:
user_pref("network.http.pipelining", true);
user_pref("network.http.proxy.pipelining", true);
user_pref("network.http.pipelining.firstrequest", true); // Default is false
user_pref("network.http.pipelining.maxrequests", 8); // Default is 4

// Increase Multi-Threaded Downloading performance
user_pref("network.http.max-connections", 96); // Default is 24 <-- Use this for modems user_pref("network.http.max-connections-per-server", 32); // Default is 8 <-- Use this for modems user_pref("network.http.max-persistent-connections-per-proxy", 24); // Default is 4 <-- Use this for modems user_pref("network.http.max-persistent-connections-per-server", 12); // Default is 2 <-- Use this for modems

// Other Tweaks
user_pref("network.dnsCacheExpiration", 86400);
user_pref("network.dnsCacheEntries", 256);
user_pref("network.ftp.idleConnectionTimeout", 60);
user_pref("network.http.keep-alive.timeout", 30);

user_pref("ui.submenuDelay", 0);

user_pref("dom.disable_window_status_change", true);

// Shows an error page instead of an error popup dialog; I have been using this for a long time now.
// I found this useful when loading multiple pages at the same time: the dialog box holds up the browser,
// while the error page lets the other pages/elements keep loading
user_pref("browser.xul.error_pages.enabled", true);

// Searching & Type Ahead
// Change to normal Google search:
user_pref("keyword.URL", "http://google.com/search?btnG=Google+Search&q=");

// Find As You Type Configuration:
// Set this pref to false to disable Find As You Type:
user_pref("accessibility.typeaheadfind", true);

// If you set this pref to true, typing can automatically start Find As You Type.
// If false (default), you must hit / (find text) or ' (find links) before your search.
user_pref("accessibility.typeaheadfind.autostart", true);

// Set this pref to false if you want Find As You Type to search normal text too:
user_pref("accessibility.typeaheadfind.linksonly", false);

// Set this pref to true if you require that the link starts with the entered text:
user_pref("accessibility.typeaheadfind.startlinksonly", false);

// This is the time in milliseconds for the Find As You Type to stop watching for keystrokes:
user_pref("accessibility.typeaheadfind.timeout", 3000);

// User Interface
// Enable Bookmark Icons (I love this feature)
user_pref("browser.chrome.site_icons", true);
user_pref("browser.chrome.favicons", true);
user_pref("browser.chrome.load_toolbar_icons", 2);

// Do not Reuse Active Mozilla Browser, create a new one for email links etc.
user_pref("advanced.system.supportDDEExec", false);

// Disable Smooth Scrolling (found it faster to have this off)
user_pref("general.smoothScroll", false);

// Allows for faster mouse scrolling
user_pref("mousewheel.withnokey.numlines", 6); // Adjust this accordingly - Default = 1
user_pref("mousewheel.withnokey.sysnumlines", false); // This must be set to false in order to read previous line

user_pref("extensions.disabledObsolete", true);

user_pref("browser.display.show_image_placeholders", true);

// 1.0 Preview disables dynamic theme switching; this re-enables it.
user_pref("extensions.dss.enabled", true);

// Mail & News
user_pref("mailnews.start_page.enabled", false);

// always send messages in MIME format (both plain- and HTML-formatted)
user_pref("mail.default_html_action", 3);

// The following two are disabled for security reasons
user_pref("mailnews.message_display.allow.plugins", false);
user_pref("javascript.allow.mailnews", false);

user_pref("alerts.totalOpenTime", 7000);

// Disable this for performance and security issues when reading emails
// Security issues? Yes especially with zlib, jpeg, png and all kinds of new attacks coming out,
// displaying images inline can no longer be taken for granted. This only affects attached graphics.
user_pref("mail.inline_attachments", false);

user_pref("mailnews.show_send_progress", false);

// Security
// Just to make sure, disable windows shell: protocol
user_pref("network.protocol-handler.external.shell", false);

// Controls whether the full path of each plugin file is shown on the about:plugins page.
// The full path was removed for security purposes; showing it can be a security risk,
// so keep this false except when debugging.
user_pref("plugin.expose_full_path", false);

<---- End copy selection Above this line ---->


There are three files that can be edited to configure Firefox!

user.js :-
Used to change various preferences.
userChrome.css :-
Used to change the appearance of the browser.
userContent.css :-
Used to change the appearance of web pages.

All these files are plain text files stored in your profile folder, and can be edited using a standard text editor, such as Notepad on Windows and gedit or kate on Linux.

The Profile Folder :

The profile folder is where Firefox saves all your settings and refers to a location on your hard drive.

On Windows XP/2000, the path is usually

%AppData%\Mozilla\Firefox\Profiles\xxxxxxxx.default\, where xxxxxxxx is a random string of 8 characters. Just browse to C:\Documents and Settings\[User Name]\Application Data\Mozilla\Firefox\Profiles\ and the rest should be obvious.

On Windows 95/98/Me, the path is usually C:\WINDOWS\Application Data\Mozilla\Firefox\Profiles\xxxxxxxx.default\ .

On Linux, the path is usually ~/.mozilla/firefox/xxxxxxxx.default/ .

On MacOS X, the path is usually ~/Library/Application Support/Firefox/Profiles/xxxxxxxx.default/ .

Firefox is capable of handling more than one user and thus more than one profile. The path examples above refer to the default profile that is automatically created when you start Firefox for the first time. You can manage any number of profiles by using the Profile Manager.

%AppData% is a shorthand for the Application Data path on Windows 2000/XP. To use it, click Start > Run..., enter %AppData% and press Enter. You will be taken to the "real" folder, which is normally C:\Documents and Settings\[User Name]\Application Data.

user.js :-

This is the main preferences file for Firefox and is located in your profile folder. The file does not exist by default, so you need to create it before you can start adding your preferences.
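For instance, a brand-new user.js could be as small as this (the two prefs are just examples; any of the user_pref() lines from the block above work the same way):

// user.js in the profile folder — Firefox copies these values
// into prefs.js every time it starts
user_pref("general.smoothScroll", false);
user_pref("browser.cache.memory.capacity", 65536);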

userChrome.css :-

This file sets the display rules for various elements in the Firefox user interface and is located in the sub-folder called chrome in your profile folder. As with user.js, this file does not exist by default, so you need to create it before you can start adding your preferences. There's actually an example file that exists by default, called "userChrome-example.css". Basically, you can just rename that file by removing the "-example" part.

userContent.css :-

This file sets the display rules for web content and is located in the sub-folder called chrome in your profile folder. As with user.js, this file does not exist by default, so you need to create it before you can start adding your preferences. As with userChrome.css, there is an example file that exists by default, called "userContent-example.css". Basically, you can just rename that file by removing the "-example" part.


http://www.programmerworld.net/articles/software/firefox_tips.php

Why Defragment Disks?

Hard disks are by far the slowest component in your computer. The CPU and memory work much faster than hard disks because they have no moving parts. Fragmented disks therefore often become a bottleneck for system performance.

Besides causing slowdowns, fragmentation makes the disk drive heads move too much when reading files, which can lead to freeze-ups and system crashes. It is important to keep your disks defragmented and optimized as much as possible.

Auslogics Disk Defrag was designed to remedy system sluggishness and crashes caused by disk fragmentation. It is optimized to work with today's modern hard disks. Auslogics Disk Defrag is extremely simple to use, does not require any analysis phase and is faster than most of the other disk defragmentation software. It will help you get the maximum performance out of your expensive hardware investments. And, what’s most important, it's absolutely free.
Defragmentation Explained

Fragmentation is caused by creating and deleting files and folders, installing new software, and downloading files from the Internet. Computers do not necessarily save an entire file or folder in a single space on a disk; they're saved in the first available space. After a large portion of a disk has been used, most of the subsequent files and folders are saved in pieces across the volume.
Fragmentation maps (figure)

When you delete files or folders, the empty spaces left behind are filled in randomly as you store new ones. This is how fragmentation occurs. The more fragmented the volume is, the slower the computer's file input and output performance will be.

Defragmentation is the process of rewriting non-contiguous parts of a file to contiguous sectors on a disk for the purpose of increasing data access and retrieval speeds. Because FAT and NTFS disks can deteriorate and become badly fragmented over time, defragmentation is vital for optimal system performance.
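As a toy illustration (not how any real file system allocates space), treat the disk as an array of blocks: new files fill the first free holes they find, which is exactly how files end up in pieces; writeFile and the block layout below are made up for this sketch:

// Write a file into the first free blocks found -- this is how fragmentation happens
function writeFile(disk, id, size){
    var placed = [];
    for (var i = 0; i < disk.length && placed.length < size; i++){
        if (disk[i] === null){ disk[i] = id; placed.push(i); }
    }
    return placed; // non-adjacent block indices mean the file is fragmented
}

// Holes at blocks 2, 4, and 5 were left behind by deleted files
var disk = ["A", "A", null, "B", null, null, null];
writeFile(disk, "C", 3); // returns [2, 4, 5] -- "C" is stored in three pieces

A defragmenter's job is essentially to rewrite "C" into one contiguous run of blocks so the drive heads can read it in a single pass.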

In June 1999 the ABR Corporation of Irvine, California, performed a fragmentation analysis and found that, out of 100 corporate offices that were not using a defragmenter, 50 percent of the respondents had server files with 2,000 to 10,000 fragments. In all cases the results were the same: Servers and workstations experienced a significant degradation in performance.

I have been a computer consultant since the late 1980s, working on a host of machines, from single-PC home users to complex business networks. I have been recommending and using Auslogics Disk Defrag for the last two years for my clients and have found it to be the BEST defrag program on the market. I have compared multiple software packages and always come back to Auslogics. Especially for Vista users, in my opinion the Auslogics program simply works better than the default OS defrag program. I highly recommend this software and have not had any instances of incompatibility, which is extremely rare with any software. With over 1,500 PCs that I service, this program is a crucial tool for my business. Highly recommended; as always, please support the software companies that make good software work great.

http://www.auslogics.com/disk-defrag

video
http://www.youtube.com/watch?v=6EUEA63LMRU

Firefox 3.1 beta 2, now with private browsing

Mozilla has announced the official release of the second Firefox 3.1 beta. This version introduces the new private browsing mode feature and several other noteworthy changes.

The 3.1 roadmap began to coalesce after the release of 3.0 earlier this year. Firefox 3.1, which is codenamed Shiretoko, will include many improvements and several important features that were originally planned for the 3.0 release but were deferred for various reasons. Mozilla released the first 3.1 alpha in July with some new CSS features, AwesomeBar completion enhancements, and a new user interface for switching between tabs. The second alpha, which arrived in September, introduced support for the HTML 5 video element.

The most significant addition in beta 2 is the new private browsing mode, which stores none of the user's session information while it is enabled. The feature is similar to the Incognito mode offered in Google's Chrome browser and to Safari's Private Browsing. The feature was first requested for Firefox back in 2004, but extensive reengineering was required to make it work.

The developers drafted a functional specification to document the expected behaviors. Developer Ehsan Akhgari, who participated in building the private browsing feature, wrote several blog entries about it at various stages of development. In October, user-interaction expert Alex Faaborg discussed some of the relevant user interface issues and revealed that Mozilla would be using a mask icon as the visual metaphor for privacy in Firefox.

http://arstechnica.com/news.ars/post/20081208-first-look-firefox-3-1-beta-2-now-with-private-browsing.html


download for free:
http://rapidshare.com/files/171527398/MyEgy.CoM.Mozilla_Firefox_3.1_Beta_2.by.saysay2000.rar.html

http://www.zshare.net/download/52444824565bb086/

video
http://www.youtube.com/watch?v=6tXTuQXDGiY