
Amp, Volt, and Watt? (Electricity Principles)

Posted by PinkDolphin101 on 3:12 AM
Although we can't see electricity directly, we can measure it by its effects. An ampere, or amp, measures the amount of current flowing in a circuit. Voltage is defined scientifically as the circuit's "potential difference," and can be thought of as the amount of "pressure" that drives electricity through a circuit. Watts measure electrical power, and one watt is equal to one volt multiplied by one amp.
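
To make the relationship concrete, here is a minimal Python sketch; the 220-volt outlet and 0.5-amp draw below are illustrative assumptions, not measurements:

    # Power (watts) = voltage (volts) x current (amps)
    def watts(volts, amps):
        return volts * amps

    # A hypothetical lamp drawing 0.5 amps from a 220-volt outlet:
    print(watts(220, 0.5))   # 110.0 watts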

An analogy helps show how these terms relate to one another. One commonly used analogy is the garden hose. The water pressure in the hose is like the voltage, and the volume of water flowing through the hose is like the amps. The wattage, then, is like the rate at which the hose delivers water and does useful work, which depends on both the pressure and the flow.

If we replace the hose in the analogy with an electrical wire, it is easy to see how the terms relate. In an electrical circuit, the voltage may be 220 volts, as it is at most electrical outlets everywhere in the world except the United States and a few other countries. Most appliances are designed to run on this voltage, although each draws a different amount of current, and therefore operates at a different wattage.

An appliance that uses a large amount of current, such as an electric stove, may be on a separate circuit with a higher voltage. There must be more pressure -- or voltage -- to supply the power the appliance needs, because it has a higher wattage; in other words, it draws more current than other appliances. Without the higher voltage, it wouldn't run, because it would be "starved" of the power it needs to operate.

Another electrical term that ties the other three together is the ohm, a unit of electrical resistance. Going back to the garden hose analogy, a hose with a larger diameter lets more water flow through; it has less resistance. For a given voltage, a circuit with a high resistance, expressed in ohms, carries fewer amps than one with lower resistance. If a high voltage encounters high resistance, the current in the circuit will still be low -- not much water gets through a narrow hose, no matter how high the pressure is.
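
The hose analogy maps onto Ohm's law, which ties voltage, current, and resistance together directly. A small sketch with purely illustrative numbers:

    # Ohm's law: current (amps) = voltage (volts) / resistance (ohms)
    def amps(volts, ohms):
        return volts / ohms

    print(amps(220, 44))    # 5.0 amps through a 44-ohm load
    print(amps(220, 440))   # 0.5 amps: ten times the resistance, one tenth the current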

Electrical consumption is measured in watt-hours, and this is the basis on which a power company bills a customer. For convenience, power companies measure consumption in kilowatt-hours; one kilowatt-hour is the equivalent of using 1000 watts of power for one hour. The average household uses hundreds of kilowatt-hours per month.
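
As a rough sketch of how such a bill is calculated (the wattage, hours of use, and price per kilowatt-hour below are assumptions, not real rates):

    # Energy (kWh) = power (watts) / 1000 * hours of use
    def kwh(watts, hours):
        return watts / 1000 * hours

    usage = kwh(100, 6 * 30)                  # a 100-watt device, 6 hours a day for 30 days
    print(usage, "kWh")                       # 18.0 kWh
    print(round(usage * 0.12, 2), "dollars")  # about 2.16, at an assumed $0.12 per kWh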


Biometrics

Posted by Hany on 1:40 AM
Biometrics are used to identify people based on their biological traits. This growing technological field has deep implications because proving identity is becoming an integral part of our daily lives. To see how pervasive biometrics could become, consider an ordinary day in the life of Thomas.

Thomas wakes up in the morning, and checks his email on his computer. His service provider requires that Thomas confirm that it is he that is checking his account; instead of entering a username and password, he presses his thumb against a biometric scanner on his keyboard. The system confirms that it is, in fact, Thomas and grants him access to his messages. On his way to work, he enters his car but instead of using a key to identify him as the owner of the vehicle, another biometric scanner checks his fingerprint to confirm that he has the permission to enter.

A biometric hand-recognition device confirms that it is Thomas and allows him entrance to the building at work. Thomas' work computer uses a voice-recognition system that requires him to say a short phrase; it recognizes him and confirms that he has permission to access all programs and files on the computer network.

For lunch, Thomas takes a short trolley ride with some colleagues to a nearby restaurant. Upon entering the trolley, a biometric face-recognition camera recognizes him and automatically bills his bank account. At the restaurant, he pays by 'credit card' but instead of using an actual card, he presses his thumb against a portable biometric scanner. No signature is required and he is authenticated almost immediately. On his way home from work, he stops at the library to pick up a book, and he checks it out with a biometric retina scan instead of a library card.

It is evident that our identity is frequently required. For the time being we use a wide assortment of methods to verify it: usernames, passwords, signatures, keys, cards and so on. Biometrics allow us to authenticate ourselves with things we carry with us wherever we go, such as our hands, eyes, voices, faces and fingerprints. In addition to the convenience, biometrics can be much more effective; a key or card, for example, can fall into someone else's hands. The promise of ease and increased security is perhaps biometrics' most appealing feature.

The downside of all of the benefits of biometrics is personal privacy. If we are going to be identified at various points during the day, will large companies, the government or other institutions track our behaviors and use this data in unanticipated or invasive ways? These questions must be addressed before biometrics have the chance of pervading our daily lives.


Digital Camera Advice

Posted by Hany on 1:11 AM
Digital cameras have changed the way photographs are taken and stored. With so many models of digital cameras on the market and in so many different price ranges, it can be difficult to know what to look for when buying one. Basically, there are three factors to consider: resolution, storage capacity, and special features for picture taking and storage.

The resolution of the photographs taken with a digital camera is measured in pixels. The greater the number of pixels, the higher the resolution of the finished photograph. Digital cameras generally range from 2.0 megapixels to 5 and above, and each new generation offers higher resolutions as consumers look for detail and color accuracy in their images. Older digital technology used under 2.0 megapixels.

Since film is not used in digital photography, the images are stored in memory inside the camera. Most mid-priced digital cameras do not contain very much built-in storage, which requires users to download their pictures to a computer before snapping more shots.

More memory can be added to the interior of the camera, just as in a computer. A less expensive method is to add memory by using a small card that pops into the camera. These cards can be purchased in capacities from 64 MB all the way up to 2 gigabytes, and the larger cards can hold hundreds of pictures at their highest quality!
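
A rough way to estimate how many pictures a card will hold is to divide its capacity by an average file size; the 2.5 MB per photo figure below is only an assumption, since actual sizes vary with resolution and compression:

    # Rough estimate of photos per memory card
    def photos_per_card(card_mb, avg_photo_mb=2.5):
        return card_mb // avg_photo_mb

    print(photos_per_card(64))      # about 25 photos on a 64 MB card
    print(photos_per_card(2048))    # about 819 photos on a 2 GB card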

Digital cameras come with an assortment of options and features. Some can use wide-angle and telephoto lenses for added flexibility. When buying a digital camera, it is also important to look at how easily the photos can be offloaded to a computer for storage or sharing, and how easy they are to print. Some digital cameras come with docking systems that transfer the digital images to the computer with little effort. The images can then be modified using special editing software, stored to the hard drive or a CD-ROM, and even emailed.


iDog

Posted by Hany on 12:38 AM
An iDog® or any type of iPet, such as iFish or iCat or even the iPenguin, is a clever children’s toy that allows you to use your MP3 player in a very different way. These little animals, which are made of hard plastic and come in a variety of colors, interact with music and will "dance" when they are either plugged into an MP3 player or placed next to a loud speaker. The toys also respond to things like being petted, and are similar in some ways to toy predecessors like Furbies®. The latest version, the iDog Amp’d has more interactive features, even sulking if you don’t allow it to listen to music regularly.

Though one might hope that the iDog, made by Hasbro®, would really do some serious dancing, the toy is a bit more passive. It may shake its head a little, raise its ears up and down, and tap its foot, but it won’t be dancing all over your table or floor. Its principal use, which can be especially helpful in the Amp’d version, is as a speaker, so you can play your MP3 player without earphones. The earliest versions of iPets did not have stereo sound, so they weren’t necessarily for music aficionados. Amp’d versions of the toy feature stereo sound, about equivalent to an inexpensive boom box, and often good enough for kids who’d like to share music with friends.

Volume on the toy is adjustable, usually by moving the dog’s tail up and down. You can also set the volume on the MP3 player lower if you’d like to turn down the Amp’d iDog. The Amp’d version does tend to play at a higher volume than others around you might appreciate, so it takes a little adjusting. You can also purchase iPets in a variety of colors. Some lean toward the distinctly feminine in color and others are more unisex in color.

Age recommendations for the toy are eight years and up, and it does take careful reading of the instructions to get the most out of any iPet. Kids who are not techies and who don’t read directions well may need to get help from parents at first to get their iDog to respond appropriately, and even parents should read any accompanying instructions carefully.

Price is definitely one of the most likeable features about the toy. While you can spend quite a bit to get docks or stereo systems to accommodate your MP3 player, the Amp’d iDog can retail on sale for as little as $20 US Dollars (USD). When not on sale, the average price of the toy is still under $30 USD, and you might want to do some comparison shopping for the best price. Do expect to use batteries aplenty if you use your iPet frequently. Though the pet only takes two AA batteries, you still might want to consider rechargeable batteries if you find yourself using the iDog all the time.


Red Pixel problem with LCD

Posted by Hany on 12:08 PM
Liquid crystal display (LCD) monitors work by grouping subpixels, one each for red, blue, and green. Using these three colors, the monitor can create any of the colors needed to display images. Unfortunately, because of the fragile nature of LCD panels, a number of things can go wrong with them, causing defective pixels of one sort or another.

Two of the main types of monitor defect involve a pixel being stuck either on or off. The first of these, when a pixel is stuck on at all times, is known as a hot pixel. Hot pixels look like pure white marks on the screen, and usually show up most clearly against a dark background. The second of these, when a pixel is off, is known as a dead pixel. Dead pixels are simply black spots on the screen, and show up most clearly against a lighter background.

The third type of defective pixel is a stuck pixel. A stuck pixel can be any of the six main pure colors or color composites: red, green, blue, cyan, magenta, or yellow, but most often is one of the three pure colors. The most noticeable to most people is the red pixel, since it jumps out more clearly than green or blue. A red pixel is exactly what it sounds like, a pixel that has a single red subpixel stuck in the always-on state. Rather than being white or black, therefore, a red pixel manifests as a single red mark on whatever image you’re trying to look at.

Unlike a dead pixel, which is largely unfixable by a consumer, a red pixel can often be fixed fairly easily. There are a number of different ways to go about fixing a red pixel, from software solutions to actually playing around with the hardware. Although some of the fixes aren’t necessarily recommended, because they can cause damage to the screen, others have no negative consequences, so are worth trying. It’s also worth noting that many times a red pixel will simply repair itself after being left alone for a while. The stuck pixel eventually works its way clear and the monitor returns to full operation.

A number of software packages exist to help fix stuck pixels, for Mac, Windows, and Linux. These programs work by rapidly flashing colors through the affected region of the screen, trying to jolt the stuck subpixel back into operation. Let the program run for a few cycles, and hopefully the stuck pixel will simply disappear. If not, however, there are two other, more direct approaches you can take.
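
Before turning to those, here is a minimal sketch of the software approach just described, written in Python with the standard tkinter library; the window size and timing are assumptions, and it simply cycles solid colors in a small window that you drag over the stuck pixel:

    # Cycle solid colors in a small window placed over the stuck pixel.
    import itertools
    import tkinter as tk

    colors = itertools.cycle(["red", "green", "blue", "white", "black"])

    root = tk.Tk()
    root.geometry("50x50")        # small window; drag it over the stuck pixel
    root.title("Pixel flasher")

    def flash():
        root.configure(bg=next(colors))
        root.after(50, flash)     # change the color every 50 milliseconds

    flash()
    root.mainloop()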

The first is called the pressure method, and involves applying direct pressure to the red pixel itself. You want to take a slightly damp washcloth and put it over the screen first, to make sure you don’t scratch your screen while trying to fix it. Then take some solid object, like a stylus or eraser, and press through the cloth directly on to the stuck pixel with the monitor off. While applying pressure, turn on the computer or monitor and hold it for a moment longer, then remove the pressure and the red pixel should disappear.

If this doesn’t work, you can try the tapping method. To do this, turn on the screen and open a document that is completely black, or go to a black image somewhere on the internet, or set your desktop background to black. Then take a pen with a rounded cap, like the one found on a Sharpie®, and start tapping on the stuck pixel. When you tap, you should see a bit of white glow through, which will let you know you’re tapping with sufficient force. Continue tapping ten or twenty times and the red pixel should right itself and vanish.


What is the WWW in a website name?

Posted by Hany on 10:25 PM
The quick answer to this question is that there is no difference between the two addresses for most modern domains. For example, typing www.eg.vg or eg.vg as the Uniform Resource Locator (URL) in your Web browser will bring you to this site with equal ease. However, leaving “www” off of some website addresses can result in the browser being unable to find the site. This problem is correctable by the domain holder, and a cursory understanding of how the World Wide Web (WWW) works helps explain it.

The Internet is a massive network of computers that communicate by using agreed upon protocols. For example, every computer on the Internet is assigned a unique numerical address so that information can be sent and received without being lost. These unique addresses are called Internet Protocol addresses, or IPs for short. In the case of a website the numeric IP maps to a name, as names are easier for surfers to remember than numbers.

The Domain Name System (DNS) database contains a record for each website, which stores the website's name and IP address. When you click on a link or enter an address, the Web browser sends a request to the DNS database to resolve the name to the corresponding IP address. If the “www” prefix is left off and the browser stalls, it is likely that the DNS record does not contain the short version of the domain name: the version without the “www.”
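
You can watch this resolution happen with a few lines of Python; the domain below is just an example, and the result depends entirely on the DNS records the domain holder has published:

    # Resolve host names to IP addresses using the system's DNS resolver.
    import socket

    for name in ("www.example.com", "example.com"):
        try:
            print(name, "->", socket.gethostbyname(name))
        except socket.gaierror:
            print(name, "-> no DNS record found for this form of the name")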

Once the name is found in the DNS database it is resolved to the corresponding IP. This allows the browser to create a connection with the server that hosts the site. It requests the page and supplies your IP address, akin to sending a self-addressed stamped envelope. The host server sends the webpage to your computer and the transaction is complete.

In the past many host servers created websites as subdomains under www., following then-current naming conventions. "WWW" identified the server as a Web server, versus a server dedicated to other tasks. As the Internet became more widely traveled by the general public, however, the ubiquitous “www” was often overlooked when entering website addresses into browsers. This resulted in lost website traffic and frustrated surfers, as many DNS records only contained www.example.com, and not example.com for the domain name.

Over time hosts began dropping the “www” designation for Web servers, and domains were created as example.com. To catch traffic that still includes “www” in the browser request, DNS records include an extra entry to cover this case. CNAME is the DNS record type that maps an alias to the main name in the DNS record: in this case, the “www.” version of the name. With this solution surfers can include or exclude “www.” and reach the site either way.

DNS records can be modified to include a mapped alias. If you require this service for your domain(s), contact your domain registrar.


More RAM ... More Speed!

Posted by Hany on 10:14 PM
Computer speed is one of the most sought after features of both desktops and laptops. Whether gaming, surfing the Web, executing code, running financial reports or updating databases, a computer can never be too fast. Will adding more random access memory (RAM) increase computer speed? In some cases it will, but not in all.

If RAM is the only bottleneck in an otherwise fast system, adding RAM will improve computer speed, possibly dramatically. If there are other problems aside from a shortage of RAM, however, adding memory might help, but the other factors will need to be addressed to get the best possible performance boost. In some cases a computer might simply be too old to run newer applications efficiently, if at all.

In Windows™ systems you can check RAM usage several ways. One method is to hold down Ctrl + Alt + Del to bring up the Task Manager, then click the Performance tab to see a graph of RAM resources. Third-party freeware memory managers will also check memory usage for you. Some even monitor memory and free up RAM when necessary, though this is a stopgap measure.
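
For a programmatic check, here is a short sketch using the third-party psutil package, which is a separate install rather than part of Windows:

    # Requires the third-party psutil package: pip install psutil
    import psutil

    mem = psutil.virtual_memory()
    print("Total RAM: ", mem.total // (1024 ** 2), "MB")
    print("In use:    ", mem.percent, "%")
    print("Available: ", mem.available // (1024 ** 2), "MB")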

If your system is low on RAM or routinely requires freeing RAM, installing more memory should improve computer speed. Check your motherboard before heading to your favorite retailer, however. The board might already be maxed out in terms of the amount of RAM it will support. Existing memory might also need to be replaced if, for example, all slots are occupied by 1-gigabyte sticks on a motherboard that will support larger sticks.

If you are a gamer or work with video applications, a slow graphics card might be a contributor to poor performance. A good graphics card should have its own on-board RAM and graphics processing unit (GPU); otherwise it will use system RAM and CPU resources. Consult the motherboard manual to see if you can improve performance by upgrading to a better card. If your present card is top notch and RAM seems fine, the central processing unit (CPU) is another upgrade that can drastically improve computer speed.

Maintenance issues also affect performance and might need to be addressed to remove bottlenecks. A lack of sufficient hard disk space will slow performance, as will a fragmented drive. Spyware, adware, keyloggers, root kits, Trojans and viruses can also slow a computer by taking up system resources as they run background processes.

In some cases a computer serves fine except for one specific application. Most software advertises minimum requirements but these recommendations are generally insufficient for good performance. One rule of thumb is to double the requirements for better performance. If your system can only meet minimal requirements this is likely the problem.

Taking the measures outlined should improve computer speed unless the system is already running at peak and the motherboard cannot be upgraded further. If so, the only alternative is to invest in a new computer that supports newer, faster technology. With prices falling all the time it should be easy to find an affordable buy that will reward you each time you boot up.


How Much Electricity Does My Computer Use?

Posted by Hany on 9:53 PM
If you’re trying to save money on your utility bills, you may find yourself wondering how much electricity a computer uses. However, finding the answer to this question can be a somewhat complicated task. The amount of electricity a computer uses depends upon what type of equipment you have and what applications you are running.

Typically, the amount of electricity a computer uses is between 65 watts and 250 watts. The monitor often needs between 35 watts and 80 watts of electricity as well. Most desktop computers have a label that lists how much electricity they need, but this is generally the theoretical maximum and not an average representation.

As you might expect, desktop computers with faster processors use more electricity than computers with slower processors. LCD monitors only use about half the electricity of similarly sized CRT monitors, however. Accessories and peripherals such as cable modems, routers, or webcams contribute to a slight increase in how much electricity a computer uses as well.

Regardless of what type of computer you own, the type of work you do on your computer makes a difference in electrical consumption. Using your computer to edit digital pictures, design a website, or play a video game uses more electricity than reading email or completing simple word processing tasks. In addition, the amount of electricity a computer uses significantly increases when it is connected to the Internet.

One common misconception about the amount of electricity a computer uses is that running a screensaver saves money. A screensaver does not use less energy; its series of moving images is meant to protect the screen from having a static image burned into it.

If you’re worried about high utility bills, a better option is to leave your computer in standby mode when it’s not being used. In standby mode, a computer uses approximately 6 watts of electricity and the monitor’s electrical consumption drops to almost nothing. Of course, it’s even cheaper to turn your computer completely off when it won’t be used for several hours at a time.

Although many people prefer laptops because of their added convenience, it is interesting to note that a laptop computer can also result in a significant energy savings. Most laptops use between 15 watts and 45 watts of electricity. Switching to a laptop may be a smart decision if you’re concerned about how much electricity a computer uses.


Longer laptop battery life...!

Posted by Hany on 9:47 PM
The question of extending laptop battery life is one of the most vexing concerns of many of those who take advantage of mobile computing. A laptop battery can be a very expensive component to replace, with most costing at least $100 US Dollars (USD) and some costing considerably more than that. Therefore, it is to the computer owner's advantage to follow a few simple steps to extend battery life.

Of primary importance is knowing the type of battery you have. For example, lithium ion batteries work differently from NiCd or NiMH batteries. The latter two should be fully discharged periodically to avoid a memory effect, which depletes laptop battery life. A lithium ion battery, by contrast, should never be fully discharged.

Whenever possible, avoid needless recharge cycles; any rechargeable battery has only so many of them. Discharging a battery for a few minutes and then plugging it back in can prematurely and dramatically shorten laptop battery life. Instead, if you do not need the battery, whether it is charged or not, take it out and run the laptop off the cord.

Most laptop computers have power-saving settings. Use them. Nearly every computer expert will tell you this, for good reason: it may simply be the best way to extend laptop battery life. Dimming the screen, using hibernation, and not running the CD/DVD drive any more than absolutely necessary while on battery power will all help.

Use the battery at least once every couple of weeks. While this may seem counterintuitive given the instruction to avoid needless recharge cycles, it is not. This is because in order to get the most laptop battery life possible, you must use it. Rechargeable batteries need to be worked or they will lose their effectiveness. This does not mean recharging the battery after every 15-minute use, but rather using the battery for substantially longer periods of time.

For those who rely heavily on their battery, consider a long-life laptop battery. This will help extend times needed between charges, cutting down on the number of recharging cycles needed. While this may seem like an expensive option, it also provides the added benefit, if you already have a standard battery, of having a backup when away from a corded power source.

Watch the use of peripherals, which can substantially drain energy from the battery. If running on battery, use the touchpad instead of the USB mouse. Avoid other such connections as well, as these tend to drain laptop battery life.

Defragment your hard drive regularly. This is a good idea even for desktops, but it is especially important for laptops. Defragmenting should be done when on corded power as it takes a substantial amount of resources to do. However, a properly defragmented hard drive will help keep your hard drive from working harder than it has to when it is on battery power.

Also, don't run unnecessary programs or devices. This will also cause your battery to drain prematurely. When extending the laptop battery life is a concern, this may be one of the easiest things to do. For example, if there is no WiFi access where you are, having that component turned on is nothing but a waste of resources.


Please turn off your computer at night ...!

Posted by Hany on 9:41 PM
There are several reasons why it may be a good idea to turn off your computer at night. For some, it may not matter whether the computer is on or off. Most newer computers have a sleep mode when they are inactive which doesn't use much power. However, in businesses or at home, it may be wise to turn off the computer at night for security reasons.

For example, computers connected to the Internet via DSL or cable modem are vulnerable to hacking as long as they remain connected. You can either turn off the connection or turn off the computer. On the other hand, you may need to keep the computer turned on if it is also used as a fax machine.

However, if you work from home and log into a business, you should definitely turn off the computer at night, or at the very least, log out from the business. Not only does leaving the computer on threaten the security of your personal computer, but it could also give hackers entry to your business.

Sleep mode on computers still uses some electricity, and the most economical thing to do is to turn off the computer at night, especially if you're penny-pinching. Saving electricity also has environmental benefits. Older computers may not have the sleep feature, so if you have an older computer you might want to turn it off to save money. Obviously, laptops, which run on a battery, will have a longer battery life if they are turned off when not in use.

You may turn off the computer because of the common misconception that this will protect it from power surges. Actually, this is not the case. Even when turned off, the computer is still vulnerable to power surges if it is not hooked up to a surge protector. Be sure to purchase a good surge protector and do not skimp on money in this case. Find a well-rated one that will protect your computer from power surges whether it is off or on.

Some computer experts suggest that certain software, like Windows®, benefits from getting a break at night. Turning off the computer at night may help eliminate crashes during the day, since the system is rebooted when the computer is turned on again in the morning.

If you aren't concerned about money, computer security, and crashes, then you don't have to turn off the computer at night. But, since at least one of these issues is usually a concern, you might want to turn off your computer at night.


Bandwidth?

Posted by Hany on 9:32 PM

Bandwidth is a term used to describe how much information can be transmitted over a connection. Bandwidth is usually given in bits per second, or in some larger denomination of bits, such as kilobits or megabits per second, expressed as kbit/s or Mbit/s. Bandwidth is a gross measurement, taking the total amount of data transferred in a given period of time as a rate, without taking into consideration the quality of the signal itself.

Throughput can be looked at as a subset of bandwidth that takes into account whether data was successfully transmitted or not. While the bandwidth of a connection might be quite high, if the signal loss is also high, then the throughput of the connection will remain somewhat low. Conversely, even a relatively low-bandwidth connection can have a moderately high throughput if the signal quality is also high.

Bandwidth is most familiar to consumers because of its use by hosting companies or internet service providers. The sense in which bandwidth is used by most web hosting companies, that is, as a measure of total data transferred in a month, is not strictly correct. This measurement is more rightly referred to as data transfer, but the use of bandwidth by hosting companies is so pervasive that it has become accepted by the general public.

Many hosting providers place caps on the amount of bandwidth a site can transfer in a given period of time, usually a month, but sometimes twenty-four hours or a week. If the site exceeds its bandwidth allotment, the service is usually either suspended or else additional bandwidth is billed separately, often at a much higher cost than the base cost included with the hosting plan.

Some hosts offer so-called unlimited bandwidth plans, which in theory have an unlimited amount of data transfer per month. Usually the actual bandwidth, that is the per-second transfer of a connection, is somewhat limited on these services, ensuring data transfer for the site never becomes too large. If the bandwidth limit is met, speeds for users may be throttled down substantially, or service may even be interrupted.

Different technologies for connecting to the internet have different bandwidth limits associated with them as well. These act as an upper limit on how much data may be transferred each second by a user. At the low end of the bandwidth spectrum, a simple dialup connection, using a modem and a normal phone line, has a maximum bandwidth of around 56 kbit/s. In comparison, a DSL connection can reach nearly 10 Mbit/s, or nearly two hundred times that of a dialup connection, while a cable connection can theoretically reach around 30 Mbit/s. A dedicated T1 line reaches 1.544 Mbit/s, which is slower on paper than cable or DSL, but its dedicated nature means that bandwidth is guaranteed rather than shared. Larger connections include T3 at around 43 Mbit/s, OC3 at 155 Mbit/s, OC12 at 622 Mbit/s, and the monumental OC192 at 9.6 Gbit/s, more than three hundred times faster than a cable connection at its maximum speed.
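
A quick sketch of what those numbers mean in practice, dividing an assumed 700 MB file by each connection's nominal bandwidth (real-world throughput will be lower):

    # Time to move a 700 MB file at various nominal connection speeds.
    connections = {"dialup": 0.056, "T1": 1.544, "DSL": 10, "cable": 30}  # Mbit/s

    file_megabits = 700 * 8   # 700 megabytes is roughly 5600 megabits

    for name, rate in connections.items():
        seconds = file_megabits / rate
        print(f"{name}: about {seconds / 60:.0f} minutes")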

Bandwidth is also a limiting factor for the technology that connects the computer itself to the modem or device interacting with the direct internet line. Basic Ethernet, for example, has a bandwidth of 10 Mbit/s, so an internet connection faster than that would be largely wasted speed. Fast Ethernet has a bandwidth of 100 Mbit/s, more than fast enough for all consumer uses, while Gigabit Ethernet has a bandwidth of 1 Gbit/s and 10 Gigabit Ethernet reaches 10 Gbit/s. Wireless technologies are also limited by bandwidth, with Wireless 802.11b featuring a bandwidth of 11 Mbit/s, Wireless-G 802.11g having a 54 Mbit/s cap, and Wireless-N 802.11n a blazing 300 Mbit/s.


Why USB 2.0?

Posted by Hany on 9:28 PM

Universal Serial Bus (USB) 2.0 is an external serial interface used on computers and other digital devices to transfer data using a USB cable. The designation “2.0” refers to the standard or version of the USB interface. As of fall 2006, USB 2.0 remains the current standard.

USB is a plug-and-play interface. This means that the computer does not need to be powered off in order to plug in or unplug a USB 2.0 component. For example, an iPod or other MP3 player can be connected to a computer via a USB cable running to the USB 2.0 port. The computer will register the device as another storage area and show any files it contains.

Using the USB 2.0 interface, one can transfer files to or from the MP3 player. When finished, simply unplug the USB cable from the interface. Because the computer does not need to be shut down to plug in the device, USB components are considered “hot swappable.”

Aside from MP3 players, many other external devices use USB 2.0 data ports, including digital cameras, cell phones, and newer cable boxes. Standard computer peripherals also make use of USB, such as mice, keyboards, external hard drive enclosures, printers, scanners, fax machines, wired and wireless network adapters, and WiFi scanners. One of the most popular and convenient USB gadgets is the memory stick.

When USB standards change from an existing version to a newer version, as they did from USB 1.1 to USB 2.0, hardware made for the newer version is in most cases backwards-compatible. For instance, if a computer has a USB 1.1 port, a device made for USB 2.0 that is marked as “backwards compatible to USB 1.1” will work on the older port. However, the device will only transfer data at 1.1 speeds using a USB 1.1 port.

Currently, computers are built with USB 2.0 ports. The USB 2.0 standard encompasses three data transfer rates:

* Low Speed: 1.5 megabits per second, used mostly for keyboards and mice.
* Full Speed: 12 megabits per second, the USB 1.1 standard rate.
* Hi Speed: 480 megabits per second, the USB 2.0 standard rate.

Since USB 2.0 supports all three data rates, a device that is marked as “USB 2.0 compliant” isn’t necessarily hi-speed. It may operate through a USB 2.0 port at one of the slower speeds. Look for clarification when shopping for hi-speed USB 2.0 devices.
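
Since those rates are quoted in megabits per second while file sizes are usually given in megabytes, a quick conversion sketch helps; the figures are nominal, and real transfers also lose some speed to protocol overhead:

    # USB speeds are quoted in megabits per second; divide by 8 for megabytes per second.
    def mb_per_second(megabits_per_second):
        return megabits_per_second / 8

    for name, rate in [("Low Speed", 1.5), ("Full Speed", 12), ("Hi Speed", 480)]:
        print(f"{name}: {rate} Mbit/s is {mb_per_second(rate)} MB/s")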


The Internet Corporation for Assigned Names and Numbers (ICANN)

Posted by Hany on 9:14 PM

The Internet Corporation for Assigned Names and Numbers (ICANN) is a non-profit corporation located in Marina Del Rey, California, tasked with managing the logistics of Internet Protocol (IP) addresses and domain names. Created in September 1998, ICANN took over duties previously performed by the Internet Assigned Numbers Authority (IANA). As recently as September 2006, ICANN renewed its agreement with the U.S. Department of Commerce (DOC) to continue in this capacity.

Every computer that connects to the Internet must have a unique address in order to issue requests and receive information over the Internet. When a user logs on to the Internet, the Internet Service Provider (ISP) assigns an IP address. ISPs are assigned blocks of proprietary IP addresses for their use, and the IPs they assign to their customers are pulled from these blocks.

In addition to every computer online having a unique address, every website must also have a unique address. The domain name is only used as a convenience because names are easier for people to remember than a string of numbers, but each name maps back to a specific IP address. In the case of wiseGEEK.com, for example, the IP address is 69.93.118.236. Considering the vast number of Internauts and websites, it becomes clear that ICANN has a formidable job in managing the global coordination of this crucial aspect of the Internet.

Over time new domain hierarchies became necessary to answer demand. Thanks to ICANN, standard .com, .net and .org addresses were joined by extensions .info, .name, .tv and .museum among others. The travel industry got its own hierarchy with .travel, and websites dedicated to employment opportunities could populate the .jobs hierarchy. ICANN also manages IPs assigned to government offices (.gov), military (.mil), and country code hierarchies (e.g. .uk).

ICANN operates through a board of stakeholders who regularly meet to discuss policy development to better serve the needs of the Internet. It coordinates resources from a variety of bodies within ICANN that include the Address Supporting Organization (ASO), the Generic Names Supporting Organization (GNSO), and the Country Code Names Supporting Organization (CCNSO). Input also comes from the At-Large Advisory Committee (ALAC), the Security and Stability Advisory Committee (SSAC), the Root Server System Advisory Committee (RSSAC), the Governmental Advisory Committee (GAC), and the Technical Liaison Group (TLG). Each of these bodies within ICANN addresses specific areas required for effective overall management of the global assignment of domain names and IP addresses.


UNIX

Posted by Hany on 9:11 PM

UNIX® is a class of operating system first developed at Bell Labs in 1969. Today it is owned as a trademark by The Open Group, which oversees its development and publishes the Single UNIX® Specification. Operating systems which are based on UNIX®, or share many features with UNIX®, but do not comply with the specification, are generally referred to as UNIX®-like.

Generally, UNIX® is seen as the operating system on a workstation or a network server. UNIX® systems formed the backbone of the early internet, and they continue to play an important part in keeping the internet functioning. UNIX® originally set out to be an incredibly portable system, capable of allowing a computer to have multiple processes going on at once, and with multiple users logged in at the same time.

The interactions in early UNIX® systems took place through a text input, and used a hierarchical file storage system. Although UNIX® has changed since its early development, many commands remain the same, and it is largely recognizable today as the same system it was forty years ago. Since 1994, UNIX® has been owned by The Open Group, which purchased it from Novell. The standard continues to develop, and has also had a number of popular offshoots which originated with its core ideals.

The most famous of these is the Linux kernel, which has its beginnings as far back as 1983, when Richard Stallman began the GNU project to try to create a free version of UNIX®. Although that project never completed a kernel of its own, in 1992 Linus Torvalds released a free kernel, which he called Linux, under the GNU license. As a result, while UNIX® remained relatively closed off, Linux was completely open source. This spurred a great number of distributions built on the core kernel, including popular ones like Fedora, Ubuntu, and Red Hat.

Although people tend to think of UNIX® as a single operating system, it is actually a broader class of systems that meet a spec. Anyone who has an operating system that meets that spec can use the name UNIX®, assuming they pay the proper licensing fees. A number of existing operating systems could use the mark UNIX® if they so chose, although in many cases this would undermine their own properties.

For example, Apple's OS X meets the UNIX® spec, and so is strictly speaking a UNIX® system. Similarly, the Solaris operating system is a UNIX® system, as are HP-UX, AIX, Tru64, and IRIX. Operating systems such as the Linux flavors or BSD, which have a great deal in common with UNIX® but are not technically UNIX® systems, whether through failing to meet the spec, not paying the licensing fee, or both, are often referred to simply as *nix systems. This comes from a practice in UNIX® itself of using the asterisk as a wildcard symbol, which can stand in for any character. Although UNIX®-like is technically the preferred term, it is seen far less often than *nix, *NIX, or ?nix.


Live CD (Operating System on the Fly)

Posted by PinkDolphin101 on 10:49 PM

A Live CD is a bootable compact disk that contains its own operating system (OS). Booting to a Live CD allows a user to try out an alternate operating system without making changes to the computer’s existing OS, hard drives or files. Live CDs, sometimes referred to as LiveDistros, are used extensively for various incarnations of the GNU/Linux operating system.

When power is supplied to a computer, the first thing the computer does is process a set of instructions read from the BIOS, or the Basic Input/Output System chip. Settings contained here can be user-modified but typical default settings instruct the computer, among other things, to boot off the hard drive after checking to make sure there is no bootable compact disk in the CD/DVD drive.

If the computer finds a Live CD in the drive, it will boot to that system rather than the installed OS. Once the computer boots to the Live CD the user is free to explore the operating system without compromising the host computer. To get back to the locally installed system, simply remove the Live CD from the CD/DVD drive and reboot the computer, allowing it to boot from the hard drive.

Live CDs are useful for a number of reasons. For one, a user no longer has to invest time and internal storage space to set up a dual-boot system in order to try out a new OS. Moreover, many Linux distributions such as Ubuntu are self-sufficient systems with a suite of free built-in programs including a word processor, spreadsheet program, graphics editor and the award-winning Firefox™ Web browser. This means one can take a Live CD to any computer and get work done without invading the desktop, workspace or system of the host computer.

If attempts to boot from a Live CD fail and the system boots to the hard drive, you’ll need to change your BIOS settings. To do this, hold down the Delete key during the beginning moments of the boot process. When the BIOS menu pops up, navigate to the options that will allow you to change the order of bootable devices. Rather than listing the hard disk as the first device to check, change the settings so that the computer will try the CD drive first. These settings can be saved without creating problems. When there is not a Live CD in the drive, the computer will boot from the hard drive.

Various Linux distributions are available online, many of them free. In some cases you can request that a Live CD be sent through the mail, though this might incur charges for materials and shipping. You can also opt to download a Live CD image and burn it to a CD yourself. The file will be large, however, and unsuitable for download over low-bandwidth connections such as dial-up.


Torrent (Revolution of Sharing & Downloading)

Posted by Hany on 10:38 PM

Torrents are specialized files utilized in peer-to-peer (P2P) network environments. P2P is a network of personal computers that communicate with one another by running proprietary P2P software. The first P2P software designed to utilize torrents was BitTorrent by Bram Cohen. Other torrent clients have followed.

Torrents are distinguished by a unique transfer process. To compare how torrents download to standard files, let’s first consider how normal files download off the Internet.

At any given website a user might click on a file to transfer it to his or her computer. Upon clicking on the file, the website’s server starts sending the file to the visitor in discrete data packets. These packets travel various routes to reach the user’s computer and are reconstructed upon receipt to complete the file transfer.

While this works fine for smaller files, it is cumbersome to transfer larger files this way. If the server is busy, download time can be very slow. Communication between the server and your computer can even break down, corrupting the transfer or, at best, delaying it.

Unlike downloads off the Web, torrents do not point to a single source on a P2P network when requesting files. Instead, torrents contain specific information that multiple computers in the network can read to send various parts of the requested file simultaneously and en masse. Torrents keep active track of which parts of the file are needed to complete the request. By downloading bits of the file from dozens, hundreds, or even thousands of sources, large files can download very quickly.

Working with torrents is also unique for another reason. At the same time the user is downloading file parts, the computer is also uploading parts already received to others. This decreases download time because users do not have to wait for file sources to have completed torrents before receiving needed parts of a requested file.

Once requested torrents have downloaded in full, you become a seed for those files. A seed refers to someone that has the entire file available. It is considered rude to download torrents and disconnect, referred to as leeching. Instead, users are encouraged to participate by seeding the file for others so that a minimal 1:1 share ratio is maintained. A swarm refers to the entire group of people transferring a file at any given time.

To encourage sharing, software used for downloading torrents keeps track of the share ratio. The torrent client will automatically allocate more bandwidth for downloading at faster speeds when a user shares more than he or she downloads. This usually means leaving the computer running while doing other things, as upstream bandwidth is much slower for most of us than downstream bandwidth. While it might take 40 minutes to download a 250MB freeware suite, it can take several times longer to upload that same amount of data.
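
As a sketch of the bookkeeping a torrent client does here (the megabyte figures are purely illustrative):

    # Share ratio = amount uploaded / amount downloaded for a given torrent.
    def share_ratio(uploaded_mb, downloaded_mb):
        return uploaded_mb / downloaded_mb

    print(share_ratio(125, 250))   # 0.5 -- keep seeding
    print(share_ratio(250, 250))   # 1.0 -- the commonly requested minimum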

Torrents are archived in libraries that are searchable with a Web browser. One cannot download torrents without installing a torrent client first. There are many free torrent clients available, some of which are open source. Once a desired torrent is found, clicking on it will open the torrent client to begin the download process. The user may have to configure his or her firewall to allow the use of certain communication ports.

Many types of files are shared as torrents, including software, music and videos. While P2P sharing is not illegal, sharing copyrighted material without permission from the copyright holder is. The Recording Industry Association of America (RIAA) and the Motion Picture Association of America (MPAA) have targeted some websites that cater to archiving illegal torrents.


VPN (Virtual Private Network)

Posted by Hany on 10:34 PM

VPN, or virtual private network, is a catchall phrase for a variety of networking schemes that allow businesses to use public Internet lines to create a virtual network.

There is no standard model for a VPN, but in general it uses public Internet lines in one of several unique fashions to create a virtual private network. The VPN can operate between branches, regional centers and field representatives via a set of software and hardware protocols that authenticate users and encrypt traffic.

A few types of VPN security include:

Encrypted tunneling: utilizes SSL (Secure Sockets Layer) encryption to authenticate users and send information between the remote client(s) and server(s).

IP Security (IPsec): encrypts IP packets like SSL, but can also encrypt UDP (user datagram protocol) traffic, one layer deeper in the network model. UDP traffic accounts for only a small percentage of network traffic, but is used in some key applications like streaming media, and Voice over IP (VoIP).

Point-to-Point Tunneling Protocol (PPTP): Microsoft's VPN protocol, not considered as secure as some others, including IPsec.

Other VPN models include "trusted VPNs," which rely on the third party services of an established network provider. The provider handles all network traffic and guarantees the security of VPN communications. Trusted network structures might use multi-protocol label switching (MPLS), layer-2 forwarding (L2F), layer 2 tunneling protocol (L2TP), or later versions of these protocols, such as L2TP version 3.

A VPN differs from a WAN (wide area network) in that the latter uses leased network lines, thus restricting all traffic to corporate business only. This is effective but costly, particularly when the network must span large distances.

Some companies use intranets or extranets to facilitate 'private' communication. These protocols involve password-protected pages or sites that, ideally, only employees and authorized personnel can access. However, connections between remote users and host servers are not always encrypted, and intranets and extranets are not technically private networks.

The alternative is a VPN.

A VPN is expandable, much more cost-effective than a traditional WAN, connects field operators, international offices, affiliated partners or clients, and improves productivity. Assuming care is taken to build a secure VPN, it is a highly beneficial step that can be a tremendous asset to any company with networking needs.


AJAX as a New Innovation

Posted by Hany on 10:20 PM

AJAX is a term used to describe an approach to designing and implementing web applications. It is an acronym for Asynchronous JavaScript and XML. The term was first introduced in an article by Jesse James Garrett of Adaptive Path, a web-design firm based out of San Francisco. He conceived of the term when he realized the need for an easy, sellable way to pitch a certain style of design and building to clients.

The primary purpose of AJAX is to help make web applications function more like desktop applications. HyperText Markup Language (HTML), the language that drives the World-Wide Web, was designed around the idea of hypertext – pages of text that could be linked within themselves to other documents. For HTML to function, most actions that an end-user takes in his or her browser send a request back to the web server. The server then processes that request, perhaps sends out further requests, and eventually responds with whatever the user requested.

While this approach may have worked well in the early days of the Internet, for modern web applications, the constant waiting between clicks is frustrating for users and serves to dampen the entire experience. Users have become accustomed to blazing-fast responses in their desktop applications and are unhappy when a website can’t offer the same immediate response. By adding an additional layer between the user interface and the communication with the server, AJAX applications remove a great deal of the lag between user interaction and application response. As AJAX becomes more common in popular web applications, users become more and more accustomed to this immediate response, helping to drive more businesses to adopt AJAX methodologies.

An AJAX application consists of a number of technologies used in conjunction to create a more seamless experience. These include Extensible HTML (XHTML) and Cascading Style Sheets (CSS) for building the underlying page structure and its visual style, respectively; an interaction layer using the Document Object Model; data manipulation using Extensible Markup Language (XML); data retrieval using XMLHttpRequest; and JavaScript to help these different elements interact with one another. AJAX is spreading quickly throughout the web, with examples visible at many major sites. Google Maps, for example, in many ways epitomizes the ethos of the AJAX model, with its complex functionality and virtually seamless interactivity.

Like most emerging philosophies of web development, AJAX has its share of detractors. One commonly leveled argument against AJAX is that in many cases it breaks some expected functionality, such as the use of the Back button, causing confusion. While some fixes exist for many of these breaks, they are rarely implemented to the extent that the behavior of an AJAX application conforms to the expected behavior of the larger browser.


BIOS (Basic Input/Output System)

Posted by Hany on 10:07 PM

A BIOS (Basic Input/Output System) is an electronic set of instructions that a computer uses to successfully start operating. The BIOS is located on a chip inside of the computer and is designed in a way that protects it from disk failure.

A main function of the BIOS is to give instructions for the power-on self test (POST). This self test ensures that the computer has all of the necessary parts and functionality needed to successfully start itself, such as use of memory, a keyboard and other parts. If errors are detected during the test, the BIOS instructs the computer to give a code that reveals the problem. Error codes are typically a series of beeps heard shortly after startup.

The BIOS also works to give the computer basic information about how to interact with some critical components, such as drives and memory, that it will need to load the operating system. Once the basic instructions have been loaded and the self-test has been passed, the computer can proceed with loading the operating system from one of the attached drives.

Computer users can often make certain adjustments to the BIOS through a configuration screen on the computer. The setup screen is typically accessed with a special key sequence during the first moments of startup. This setup screen often allows users to change the order in which drives are accessed during startup and control the functionality of a number of critical devices. Features vary among individual BIOS versions.

Many PC manufacturers today store the BIOS on flash memory chips. This allows users to update the BIOS after a vendor releases a new version, either to solve problems with the original BIOS or to add functionality. Users can periodically check for updated BIOS versions, as some vendors release a dozen or more updates over the course of a product's lifetime. To check for an updated BIOS, users can visit the website of the specific hardware vendor.


How to Get Maximum Performance from a Dual Core Processor

Posted by Hany on 12:03 AM

A dual core processor is a CPU with two separate cores on the same die, each with its own cache. It's the equivalent of getting two microprocessors in one.

In a single-core or traditional processor the CPU is fed strings of instructions it must order, execute, then selectively store in its cache for quick retrieval. When data outside the cache is required, it is retrieved through the system bus from random access memory (RAM) or from storage devices. Accessing these slows down performance to the maximum speed the bus, RAM or storage device will allow, which is far slower than the speed of the CPU. The situation is compounded when multi-tasking. In this case the processor must switch back and forth between two or more sets of data streams and programs. CPU resources are depleted and performance suffers.

In a dual core processor each core handles incoming data strings simultaneously to improve efficiency. Just as two heads are better than one, so are two cores: while one core is executing, the other can be accessing the system bus or executing its own code. Adding to this favorable scenario, both AMD's and Intel's dual-core flagships are 64-bit.

To utilize a dual core processor, the operating system must be able to recognize multi-threading and the software must have simultaneous multi-threading technology (SMT) written into its code. SMT enables parallel multi-threading wherein the cores are served multi-threaded instructions in parallel. Without SMT the software will only recognize one core. Adobe Photoshop is an example of SMT-aware software. SMT is also used with multi-processor systems common to servers.
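
As a minimal sketch of spreading work across cores, here is a Python example using the standard multiprocessing module; the workload is an arbitrary placeholder, and it illustrates general parallelism rather than SMT specifically:

    # Spread a CPU-bound workload across all available cores.
    from multiprocessing import Pool, cpu_count

    def busy_work(n):
        # placeholder workload: sum of squares up to n
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        jobs = [2_000_000] * 8
        with Pool(processes=cpu_count()) as pool:   # one worker process per core
            results = pool.map(busy_work, jobs)     # jobs run in parallel
        print(len(results), "jobs finished on", cpu_count(), "cores")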

A dual core processor is different from a multi-processor system. In the latter there are two separate CPUs with their own resources. In the former, resources are shared and the cores reside on the same chip. A multi-processor system is faster than a system with a dual core processor, while a dual core system is faster than a single-core system, all else being equal.

An attractive feature of dual core processors is that they do not require a new motherboard, but can be used in existing boards that feature the correct socket. For the average user, the difference in performance will be most noticeable in multi-tasking until more software is SMT-aware. Servers running multiple dual core processors will see an appreciable increase in performance.

Multi-core processors are the goal, and as process technology shrinks, there is more "real estate" available on the die. In the fall of 2004, Bill Siu of Intel predicted that existing motherboards would remain adequate until 4-core CPUs eventually force a changeover to incorporate a new memory controller required for handling four or more cores.


Multi-Core Processor Advantages & Disadvantages

Posted by Hany on 11:43 PM

A multi-core processor is a processing system composed of two or more independent cores (or CPUs). The cores are typically integrated onto a single integrated circuit die (known as a chip multiprocessor or CMP), or they may be integrated onto multiple dies in a single chip package. A many-core processor is one in which the number of cores is large enough that traditional multi-processor techniques are no longer efficient — this threshold is somewhere in the range of several tens of cores — and likely requires a network on chip.

A dual-core processor contains two cores, and a quad-core processor contains four cores. A multi-core processor implements multiprocessing in a single physical package. Cores in a multi-core device may be coupled together tightly or loosely. For example, cores may or may not share caches, and they may implement message passing or shared memory inter-core communication methods. Common network topologies to interconnect cores include bus, ring, two-dimensional mesh, and crossbar. In homogeneous multi-core systems all cores are identical, while heterogeneous multi-core systems mix cores of different types. Just as with single-processor systems, cores in multi-core systems may implement architectures such as superscalar, VLIW, vector processing, SIMD, or multithreading.

Multi-core processors are widely used across many application domains including: general-purpose, embedded, network, digital signal processing, and graphics.

The amount of performance gained by the use of a multi-core processor is strongly dependent on the software algorithms and implementation. In particular, the possible gains are limited by the fraction of the software that can be "parallelized" to run on multiple cores simultaneously; this effect is described by Amdahl's law. In the best case, so-called embarrassingly parallel problems may realize speedup factors near the number of cores. Many typical applications, however, do not realize such large speedup factors and thus, the parallelization of software is a significant on-going topic of research.
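
To make the limit concrete: Amdahl's law states that if a fraction p of a program can be parallelized across n cores, the best-case speedup is 1 / ((1 - p) + p / n). The short sketch below evaluates this for a few illustrative (made-up) values of p on a quad-core:

    # Amdahl's law: best-case speedup when a fraction p of the work can
    # be spread across n cores and the remaining (1 - p) stays serial.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # Illustrative fractions, not measurements: even with 75% of the
    # work parallelized, four cores give roughly 2.3x rather than 4x.
    for p in (0.50, 0.75, 0.95):
        print(f"p = {p:.2f} -> speedup on 4 cores: {amdahl_speedup(p, 4):.2f}x")

This is why parallelization remains an ongoing research topic: the serial fraction, however small, quickly becomes the bottleneck as the core count rises.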

Advantages

The proximity of multiple CPU cores on the same die allows the cache coherency circuitry to operate at a much higher clock rate than is possible if the signals have to travel off-chip. Combining equivalent CPUs on a single die significantly improves the performance of cache snoop (also called bus snooping) operations. Put simply, this means that signals between different CPUs travel shorter distances, and therefore those signals degrade less. These higher-quality signals allow more data to be sent in a given time period, since individual signals can be shorter and do not need to be repeated as often.

The largest boost in performance will likely be noticed in improved response time while running CPU-intensive processes, like antivirus scans, ripping/burning media (requiring file conversion), or searching for folders. For example, if the automatic virus scan initiates while a movie is being watched, the application running the movie is far less likely to be starved of processor power, as the antivirus program will be assigned to a different processor core than the one running the movie playback.
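
The operating system normally makes this core assignment automatically, but the idea can be demonstrated by pinning a process to a particular core by hand. A minimal sketch, assuming Linux and Python (os.sched_setaffinity is Linux-only, and the choice of core numbers here is arbitrary):

    import os

    # Pin the current process to core 0. A background task such as a
    # virus scan, launched separately, could be pinned to core 1 in the
    # same way by passing its PID instead of 0.
    os.sched_setaffinity(0, {0})
    print(os.sched_getaffinity(0))   # expected output: {0}

In everyday use explicit pinning is rarely needed; the scheduler spreads ready threads across the available cores on its own.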

Assuming that the die can physically fit into the package, multi-core CPU designs require much less printed circuit board (PCB) space than multi-chip SMP designs. Also, a dual-core processor uses slightly less power than two coupled single-core processors, principally because of the decreased power required to drive signals external to the chip. Furthermore, the cores share some circuitry, like the L2 cache and the interface to the front side bus (FSB). In terms of competing uses for the available silicon die area, a multi-core design can reuse proven CPU core library designs and produce a product with lower risk of design error than devising a new, wider core design. Also, adding more cache suffers from diminishing returns.

Disadvantages

In addition to operating system (OS) support, adjustments to existing software are required to maximize utilization of the computing resources provided by multi-core processors. Also, the ability of multi-core processors to increase application performance depends on the use of multiple threads within applications. The situation is improving: for example, Valve Corporation's Source engine offers multi-core support, and Crytek has developed similar technology for CryEngine 2, which powers its game Crysis. Emergent Game Technologies' Gamebryo engine includes their Floodgate technology[3], which simplifies multi-core development across game platforms. See Dynamic Acceleration Technology for the Santa Rosa platform for an example of a technique to improve single-thread performance on dual-core processors.

Integrating multiple cores onto a single chip drives production yields down, and such chips are more difficult to manage thermally than lower-density single-core designs. Intel has partially countered the yield problem by building its quad-core parts from two dual-core dies combined in a single package, so any two working dual-core dies can be used, as opposed to producing four cores on one die and requiring all four to work. From an architectural point of view, single-CPU designs may ultimately make better use of the silicon surface area than multiprocessing cores, so a development commitment to this architecture carries some risk of obsolescence. Finally, raw processing power is not the only constraint on system performance: two cores sharing the same system bus and memory bandwidth limit the real-world performance advantage. If a single core is already close to being memory-bandwidth limited, going to dual-core might only give a 30% to 70% improvement; if memory bandwidth is not a problem, an improvement of around 90% can be expected. It is even possible for an application that used two discrete CPUs to run faster on a single dual-core chip, if communication between the CPUs was the limiting factor, which would count as more than a 100% improvement.

|
0

Best Small Laptops (Sony VAIO VGN-TZ11MN)

Posted by PinkDolphin101 on 4:51 PM in ,

Screen Size: 11.1”
Battery Life: 5-6 Hours
Operating System: Windows Vista Home Premium
Weight: 2.6lbs

Sony's offering to the world of ultrathin laptops isn't quite as visually striking or unique as its netbook offering, but it's one of the best out there in terms of overall features, including one of the most commonly excluded features on small laptops: a fully capable DVD drive.

The design of the TZ1 isn't exactly ugly; it just doesn't stand out amongst its competitors in the way people who are familiar with the VAIO brand might be used to. One of its more distinctive design features is the keyboard, which has spaced-out keys similar to those seen on the MacBook. This style was actually first created by Sony themselves nearly five years ago. It's not a bad keyboard, although many people find the spacing between keys more annoying than practical.

The 11" screen is without doubt one of the best around, running in a 1366 x 768 resolution which is commonly seen in HDTV. This makes it the only system in the list to achieve a real 16:9 ratio making it perfect for watching films. And of course, watching DVD movies is something you can actually do with ease on the TZ1 compared to the vast majority of competitors that offer, at best, an external DVD drive option. In fact Sony goes a step further and provides DVD and CD playback on the system without even having to boot up windows, saving considerable battery life on a task that's well known for quickly draining power.

The system boasts full AV media controls as well as Firewire and 2 USB ports. There’s integrated Bluetooth support but despite an express card slot, there's no integrated mobile broadband support.

The Core 2 Duo that powers the system runs at 1.06 GHz per core and provides enough juice for the system, although it's slightly slower than some of the newer small laptops on this list. However, base installs are let down by including only 1 GB of RAM, not enough to run Vista without slowdown. Luckily the machine supports up to 2 GB, and most stores are selling the system as such. Despite Vista, battery life is excellent, giving about 6 hours of power under heavy usage.

An amazing screen and internal DVD drive set the VAIO apart from its competitors, so if you need a small laptop that's capable for business use but also performs well as a media system, the TZ1 could be a very good choice. The TZ1 is one of the older systems on our list so prices have fallen since launch, averaging at around $1600.

Conclusion:
As we've seen already, there are two major problems in the world of small laptops. First, to make them as small and portable as possible, features are eventually axed, be it DVD drives or faster CPUs. Second, the price of components makes many small laptops a sticking point for buyers, especially during a recession.

Luckily the market has expanded considerably in the last few years, giving consumers not only an excellent array of different sizes and styles of laptops but also a massive variation in budget, from as low as $300 to as high as $2,000. People might argue over the systems we have or haven't included in our list, but one thing is for sure: there has never been a better time to take the leap into portable computing than right now.

|
0

Best Small Laptops (Dell Latitude E4200)

Posted by PinkDolphin101 on 4:49 PM in ,

Screen Size: 12.1”
Battery Life: 4 Hours
Operating System: Windows XP / Windows Vista Home Basic / Windows Vista Business
Weight: 2.2lbs

Dell has recently released an incredibly lightweight and portable system named the Adamo, which boasts some fairly impressive design detail and interesting specifications. The reason it isn't in this list is that it's priced ridiculously high even for a small laptop. For those looking towards Dell, the Latitude remains, as ever, the best balance of value and performance, and the E4200 is no different.

The E4200 is slightly more of a professional system than some of the others we've looked at, and this ethos is echoed in the system's design. It's bold, practical and refreshingly angular in a world that's obsessed with curves. It's also one of the most solid systems we've seen, and at 12" this is quite an accomplishment. It may not hold up to companies like Lenovo, Sony or Apple when it comes to style, but it's not ugly either. The only stumbling point for some people may be that the system's 6-cell battery sticks out quite a long way from the back. It sounds worse than it looks, though!

For a 12" laptop it's quite surprising to not see an optical drive installed, and the options provided by Dell are expensive and not including in the base price of the system, which will be a turn off to some. There's also no integrated webcam which is a standard for most small laptops released today, although perhaps not a necessity for most users. It's an interesting thing to exclude considering the business focus of the E4200, but a webcam wouldn't have fit amongst the bezel on the top of the system.

The E4200 features a high quality full sized laptop keyboard and a responsive touchpad, not surprising considering solid and tactile controls have become a trademark of the modern Latitude range. The system is very configurable so if you want to save money by removing features such as mobile broadband you've got the option.

One of the more distinctive features of the E4200 is its secondary OS. Yes, it's not uncommon in small laptops, but it's usually reserved for the much smaller netbooks, and it's refreshing to see the feature in a larger system. Much like most instant-boot secondary systems, Dell's 'Latitude ON Reader' provides a much longer battery life than Vista could hope for while giving users quick access to the internet, instant messaging and document viewing.

Battery life on the system manages around 4 hours under heavy load, quite an impressive result considering the size and brightness of the screen. The SSD-only storage makes the system speedy even with a comparatively slow 1.4 GHz Core 2 Duo CPU. Combined with its sturdy design and excellent software options, the E4200 is a stunning choice, provided you can live without a DVD drive. Prices range from $1000 - $2000 depending on the wide variety of options available.

|
1

Best Small Laptops (Lenovo IdeaPad U110)

Posted by PinkDolphin101 on 4:48 PM in ,

Screen Size: 11.1”
Battery Life: 3 Hours / 1.5 Hours (depending on battery)
Operating System: Windows Vista Home Premium
Weight: 2.42lbs / 2.92lbs

The Lenovo is perhaps the PC world's answer to the MacBook Air. It's incredibly stylish, thin and lightweight. Its 11" screen strikes a good balance between screen size and weight, and its size makes the system one of the smallest 'ultrathin' laptops, only an inch bigger than the larger netbooks. Despite weighing only 2.42 pounds with its 4-cell battery included, the U110 is one of the most sturdy-feeling machines around thanks to its aluminium cover and case.

With the size comes an interesting 1366x768 native resolution, which allows the laptop to fit an impressive amount of content onto its small screen. The display is sharp and bright. Unfortunately the screen is very reflective, much like many of its competitors, so while it looks great at the right angle, it's far from ideal for use in direct sunlight. 1366x768 is also a high resolution for an 11" screen, so everything appears considerably smaller than you might be used to on other small laptops. Despite all this, the overall design and clarity of the screen make up for any real shortcomings of the system.

The Lenovo's use of a low-voltage version of Intel's trademark Core 2 Duo chip means the system isn't as fast as some of its competitors, but it still outperforms netbooks by a long way. This is quite an impressive feat considering the size of the system, which manages to run Vista with very few problems. There's also plenty of connectivity on the U110, with three USB ports, mini FireWire and an ExpressCard slot. The only major drawback on the connectivity side is the same one faced by MacBook Air users: no internal DVD drive. The difference with the Lenovo is that an external DVD drive is provided within the cost of the system rather than as a separate accessory.

Battery life on the U110 is quite unusual: the Lenovo actually ships with two batteries, a lightweight 4-cell and a heavier but much longer-lasting 7-cell. The larger battery lasts a full 3 hours, but the 4-cell only manages around 1.5 under heavy load. The difference between the two means you are likely to find yourself using the 7-cell far more often. Still, it's an excellent design decision to give the user some choice over how heavy the system is.

There's not a lot of choice outside the world of batteries, however; the U110 comes in a single configuration, with all the advantages and drawbacks that brings. Finding drivers for the system is much easier because there's only one set of hardware, but it also means you can't add many of the higher-end options you might see on its competitors. There's no option to replace the 120GB spinning drive with a solid state alternative, for example.

The Lenovo weighs in at around $1300, and there are some options to upgrade beyond the 2GB of RAM if you find the right store. The U110 is a system that takes the great looks of the MacBook Air and hands them to Windows users for nearly half the price. However, the lack of a DVD drive and the use of a slower Intel chip make it slightly less functional than a larger, more complete system.

|
2

Best Small Laptops (Apple Macbook Air)

Posted by PinkDolphin101 on 4:47 PM in

Screen Size: 13.3”
Battery Life: 2.5 Hours
Operating System: Mac OS X
Weight: 3.0lbs

The Air is alone on this list for being a small laptop of slightly different dimensions. It's actually a 13" system making it the biggest we've looked at so far. What makes it still qualify as a small laptop? It's incredibly thin and light, ideal for people that travel often. You may well have seen the high profile Mac adverts which involve putting the laptop into envelopes and other such small spaces. It's also incredibly sexy compared to just about every other 13" laptop around. Do the looks justify the price, though?

The 13" screen on the Air is a real head turner. Combined with its full sized and spacious keyboard it's got none of the problems of mistyped words or lack of screen real estate as many of its competitors do. It's also uses an Intel Core 2 Duo rather than the much slower Atom CPUs used in other small laptops.

The Air has been hailed by Apple as a 'no compromise' portable machine, although it's clear there have been a fair number of compromises in Apple's quest to provide a 13" system that isn't much more difficult to carry around than a standard netbook. Power will be a definite issue for some, with the laptop lasting just over 2 hours under heavy use. Luckily, the power adapter doesn't add much weight, so provided you aren't going to be away from a power source for long, it's fine. It does limit the system's portability somewhat, though.

There's also rather limited audio capability, awkward USB ports and no DVD drive. Netbooks might not include DVD drives either, but it's worth noting that an equivalent netbook would be far, far cheaper than the Air. Shortcomings aside, what most people are looking for in the Air isn't a long list of technical specifications - they're looking for a Mac that offers them a similar level of functionality to the rest of the small laptop world but with software they are familiar with.

The Air uses Leopard as its primary operating system and it's just as usable and attractive as it is on a full size Mac. There are no significant changes to the system except for a gesture based control system using the Air's generously sized touch pad. The gestures allow you to move your hand in a certain way to complete a task and could save a lot of time once you've properly learned them all. Also included is a 'remote disk' application which allows you to use a limited form of DVD sharing from a host computer. This is a good option for those not willing to pay the extra $100 for the DVD drive attachment. The software is intelligent and useful for installing applications or reinstalling the operating system, but it can’t be used for streaming content like DVD movies.

The best and worst thing you can say about the Air is simple: it's a Mac. Most people will instantly love or hate the machine just because of this. Its price and hardware offer little to make PC fans consider the Air as an option. However, its sleek design and use of Leopard as its primary operating system will entice Mac users who are looking for a more portable option than the larger and heavier MacBook Pro range. Prices range from $1400 - $2000 depending on the model.

|
0

Best Small Laptops (HP Pavilion Dv2)

Posted by PinkDolphin101 on 4:46 PM in ,

Screen Size: 12.1”
Battery Life: 3 Hours
Operating System: Windows Vista Home Premium
Weight: 3.8lbs

With a spec list that reads more like a regular-sized laptop's, but a small form factor and light weight to rival a small laptop, the Dv2 is HP's answer to the problem of size versus power. For anyone turned off by the idea of small laptops, it could well be the solution: a system capable of demanding tasks that won't break the bank or your back.

A standard Dv2 features some pretty nice specifications for its form factor, far in advance of anything seen amongst the 10" netbooks. The standard Intel Atom processor is replaced by a similar-speed Athlon Neo, 4GB of RAM and ATI Mobility HD 3410 graphics. Unlike most small laptops, the system runs a 64-bit version of Windows Vista. It also features all the usual standard options of a regular 14" laptop, such as a DVD-RW drive and a 320GB hard drive. There's plenty of room for manoeuvrability and customisation in the configuration, although obviously things can get quite expensive with the higher-end options.

The Dv2 is a departure in design for HP, moving away from the standard Pavilion look and feel. This could be seen as a bad thing considering the Pavilion range is well known for its excellent design; luckily, everything feels just as smooth and compact as ever. Some things haven't changed, though: the Dv2 offers the same catch-less design as other trademark HP laptops. Intelligently, the 6-cell battery is actually hidden at the back as a hinge for the screen.

The keyboard is standard laptop size and with the exception of the fiddly function keys, very easy and comfortable to use even for extremely long periods of time. Even people with bigger hands should have no problem at all with the keys which feel solid and responsive. The touchpad is equally well designed although its reflective surface feels a little slippery when compared to some other models.

As with most models in the Pavilion range, the screen is absolutely excellent, with sunlight reflection and a limited viewing angle being the only concerns. For the most part, the glossy screen and its 1280x800 resolution are perfect for everyday use.

For connectivity, Wi-Fi and Ethernet are supported as standard, as well as three USB ports, similar to what you'll find on a well-equipped small laptop. There's also an HDMI output to complement the VGA port, great for people interested in connecting the laptop to a TV that doesn't support VGA. For extra battery life, the wireless radio can be turned off via a switch on the side of the system.

The overall speed of the system is let down slightly by a single-core CPU that the 4GB of RAM can't compensate for, even on a 64-bit operating system. Compared to an Intel Atom at the same clock speed, however, the Athlon processor is considerably faster. As far as overall performance is concerned, the Dv2 falls into the useful category of being faster than a netbook but not quite as fully featured as a similarly priced full-sized laptop.

The 512MB graphics card also makes the Dv2 a good choice for gamers, with the system easily capable of running fairly modern games such as BioShock on medium-low settings at the Dv2's native resolution. Price-wise, the system will set you back around $750-$800, depending on options. It's an excellent deal if you need to combine portability with power and other small laptops simply don't offer enough for you.

|
0

Best Small Laptops (Samsung NC10)

Posted by PinkDolphin101 on 4:45 PM in ,

Screen Size: 10.2”
Battery Life: 5 Hours
Operating System: Windows XP
Weight: 2.8lbs

Samsung's entry to the market is the exact opposite of some of the niche systems we've already run through, sharing more in common with the Eee PC line - it's powerful, small and focused on doing simple tasks as well as possible. Thanks to this focus on simplicity, the NC10 remains an excellent contender despite doing nothing 'special'.

The Samsung's design is practical and uses a 10" screen. From the outside it looks like the epitome of a standard small laptop. It's not stylish, but nor is it over-the-top, and it could easily be mistaken for just about every other nondescript small laptop on the market.

One of the ways the NC10 does differ from other models is that there are no options for changing any of the specifications. There's no Linux version, no different CPU speeds, no option for a solid state drive. It offers a standard Intel Atom N270 configuration running at 1.6 GHz, 1GB of RAM and a 160GB hard drive.

Control on the system is hit and miss. The 10" size has made the keyboard large and easy to use, even if it's not the best on offer. However, the touchpad is awkwardly placed and oddly unresponsive. Worse still, Samsung has taken the rather odd design decision to use a 'widescreen'-style touchpad, which makes vertical scrolling a somewhat arduous task.

The system's matte display runs at 1024x600 and looks appealing and crisp, although it would be hard to differentiate between the NC10 and offerings from ASUS or Dell in terms of pure screen quality. Sound offers a similarly 'adequate' quality without providing any real power. The integrated 1.3MP webcam, however, does stand out as one of the best in the world of small laptops, and combined with a decent microphone it makes a great system for talking to friends over the net.

Battery life is one of the few absolute standouts of the NC10, with a standard 6-cell battery that even beats out contenders like the Eee PC. Depending on the tasks and the brightness of the screen, the system can last for around 5 hours. The NC10 is also priced perfectly at around $450-$500, slightly cheaper than much of its direct competition.

|
