PC (Computer) terms explained

THE BASICS

Bit
In terms of electronic information, or computer data, nothing is smaller than the ‘bit’, an abbreviation of ‘binary digit’. These ones and zeros represent the lifeblood of a computer, buzzing between the processor, memory chips and data-storage devices, such as the hard disk.

In transit, bits are represented by nothing more than brief electronic pulses that zip through the various components on a computer’s motherboard. Bits can, however, have a physical presence too.

On a CD, for example, bits are represented as minuscule pits etched onto the disc’s surface. When these are ‘read’ by a CD-Rom drive’s laser beam, they’re converted into the electrical pulses that a computer can understand.

Obviously, a solitary bit isn’t particularly useful but, when strung together, they can represent numbers using a binary system (or base two – decimal is base 10).

Everything in a computer is represented as a binary number and everything a computer does is done by performing calculations on binary numbers. Thankfully, you don’t need to know anything about bits and binary to use a computer, but an understanding of bigger collections of bits is useful.

Byte
Eight bits make a byte and a byte is the smallest collection of bits that a computer can work with. If you know your binary, you’ll know that a byte can represent any decimal number from 0 to 255.

Bytes are also used to represent letters, numbers and other symbols using an arcane system known as ASCII so that when you type the letter A on your keyboard, the computer records it as the ASCII code 65 – represented in binary as 01000001 – which is one byte of data.
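
If you fancy seeing that sum at work, a couple of lines of Python (just an illustration, not something you need to know) will show the code and bit pattern behind the letter A:

    # Show the ASCII code and eight-bit pattern for the letter A
    letter = "A"
    code = ord(letter)           # ASCII code: 65
    bits = format(code, "08b")   # the same byte as binary digits: 01000001
    print(letter, code, bits)    # prints: A 65 01000001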

Kilobyte (Kb)
Even though it contains eight bits, a byte still isn’t much use alone and it’s only when bytes are grouped together that a computer can do something meaningful with them.

Perhaps the smallest practical measurement of computer data storage is the kilobyte, which consists of 1,024 bytes. (Note that in the wider world of measurements, the ‘kilo’ prefix equates to 1,000 but as computers work in binary, 1,024 is a more workable multiple.)

Computeractive always abbreviates kilobyte as Kb but you might see it elsewhere as KB and even referred to verbally as simply ‘k’.

Since single characters on a computer are represented as bytes, it follows that a kilobyte can represent roughly 1,000 characters (1,024, to be precise), including spaces – a computer doesn’t know something is there unless it keeps a record of it.

That is about the same number of characters as there are in this definition of ‘kilobyte’. This whole feature contains around 20,000 characters, which equates to about 20Kb.
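
For the curious, here is a quick Python sketch of that arithmetic, using the rough figures quoted above:

    # One character per byte; 1,024 bytes to the kilobyte
    characters = 20_000
    kilobytes = characters / 1024
    print(round(kilobytes, 1))   # about 19.5, i.e. roughly 20Kb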

Megabyte (Mb)
Moving up the scale, we encounter the megabyte – Mb, as we always call it, though you may see it written as MB or spoken as simply ‘meg’. The megabyte follows the binary-multiple convention explained above, signifying not 1,000 but 1,024 kilobytes.

There’s often confusion about the megabyte, thanks to a number of hardware manufacturers who insist on resorting to small print in order to redefine the abbreviation ‘Mb’ as meaning ‘a million bytes’.

Companies that intentionally exploit this ambiguity effectively short-change their customers, as a megabyte is technically 1,048,576 bytes, not 1,000,000.

Regardless of this marketing ploy, every computer in the world works – and will continue to work – its sums in binary, so your PC will always tot up megabytes as 1,024 kilobytes. Computer memory is generally measured in megabytes, with 512Mb being a typical complement on new PCs.

Gigabyte (Gb)
Time was when hard drive storage capacities were measured in tens and later hundreds of megabytes. Technology moved on, though, and modern hard disk capacities are beyond the point where megabytes are a practical measure, hence the arrival of the gigabyte.

The gigabyte (or Gb as we put it, GB as some put it and ‘gig’ as just about everyone calls it) is a binary multiple of a megabyte – 1,024 of them to be precise. If you’ve detected the trend, you can probably work out the size of a terabyte (Tb), the measurement next up on the scale from gigabyte.

Sadly, the abbreviated form of the measurement (Gb/GB) suffers from the same ambiguity described previously, inheriting the ‘millions of bytes’ definition from manufacturers who like to mislead and amplifying storage potential discrepancies further.

A manufacturer who insists on calling a gigabyte a thousand megabytes is selling you short: its ‘80Gb’ hard disk – 80,000 million bytes – actually holds only around 74.5Gb by the binary definition.
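
A few lines of Python make the shortfall plain, assuming the decimal and binary definitions described above:

    # An '80Gb' drive under the manufacturer's decimal definition
    advertised_bytes = 80 * 1000 ** 3      # 80,000,000,000 bytes
    binary_gigabyte = 1024 ** 3            # 1,073,741,824 bytes
    print(round(advertised_bytes / binary_gigabyte, 1))   # about 74.5Gb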

Kilobit (Kbit)
As if to complicate matters further, when it comes to measuring the quantity (and/or speed) of data transfer between PCs and components, binary conventions are thrown right out of the window.

The ‘kilo’ in kilobit (abbreviated to Kbit in Computeractive) actually does mean 1,000 and not 1,024. In other words, despite all we’ve told you so far, you must simply accept that a kilobit is 1,000 bits.

A modem, for instance, offers a theoretical maximum download speed of 56Kbit/s, meaning up to 56 kilobits (56,000 bits) of data come down the phone line each second. Get out your calculator and you should be able to work out that this is equivalent to about 6.8Kb/s (kilobytes a second).

Megabit (Mbit)
When kilobit isn’t an ample measure for the quantity of data at hand, the megabit steps in. Again, forget convention: a megabit really is a million bits and it’s more often used to describe fast data transfer speeds, such as those used by hard disks or networks.

The cabling used in a typical office network, for example, can send and receive up to 100 megabits (100Mbit) of data each second.

It’s all too easy to confuse megabit with megabyte (and similarly, kilobit with kilobyte) but the two differ by a factor of eight – and a little more, once the decimal/binary discrepancy is counted.

If you’ve been following the figures carefully, you’ll be able to calculate that one megabit (1Mbit) is the same as 122Kb and that one megabyte (1Mb) is 8,388,608 bits or 8.4Mbits.
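
If you’d rather let the computer do the conversions, this Python sketch runs the same sums – decimal multiples for bits, binary multiples for bytes, as described above:

    # One megabit, expressed in kilobytes
    megabit_bits = 1_000_000
    print(round(megabit_bits / 8 / 1024))        # about 122Kb

    # One megabyte, expressed in bits and megabits
    megabyte_bits = 1024 * 1024 * 8
    print(megabyte_bits)                         # 8,388,608 bits
    print(round(megabyte_bits / 1_000_000, 1))   # about 8.4Mbit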

SPEEDS

Hertz (Hz)
Hz is short for hertz, the metric unit for measuring radio and electrical frequencies, named after the German physicist Heinrich Rudolf Hertz.

One hertz (1Hz) means one cycle or oscillation (for a radio wave) a second but computer users will recognise the term from their monitors. CRT monitors work by refreshing their screen image many times a second and this ‘refresh rate’ is measured in Hz.

A monitor described as offering an 85Hz refresh rate, for example, redraws its image 85 times each second, making it appear flicker-free to most eyes.

Megahertz (MHz)
1Hz is one cycle a second, 1MHz is one million cycles a second. It’s impossible to picture something happening one million times a second but, when it comes to computer processors, 1MHz is nothing.

A processor works to the tick of an internal clock and typically performs one calculation on every tick. How often the clock ticks (the ‘clock speed’) determines the speed of the computer and that speed is measured in megahertz.

The first PCs had a clock speed of 4.77MHz, or 4.77 million clock ticks per second. The latest Pentium 4 PC has a clock speed of 3.2GHz (see below for a full explanation of GHz), or 3,200,000,000 ticks per second. That’s very fast.
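
To put those clock speeds another way, a short Python sketch works out how long a single tick lasts at each speed:

    # Nanoseconds per clock tick at the two speeds mentioned above
    for clock_hz in (4_770_000, 3_200_000_000):      # 4.77MHz and 3.2GHz
        nanoseconds = 1 / clock_hz * 1_000_000_000
        print(round(nanoseconds, 2))                 # about 209.64, then about 0.31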

One final note about clock speeds and MHz. Clock speed is exactly that: the speed of a processor’s clock and it isn’t always a good guide to how ‘fast’ a processor is.

All processors perform complex calculations by carrying out several simple sums in succession. Some processors are more efficient at this than others, which means they take fewer clock ticks to do the same sum as a less efficient processor.

The upshot is that it’s possible for one processor to be quicker at doing something than another, despite having a lower clock speed – fewer MHz or GHz, in other words.

Gigahertz (GHz)
In the computer world, the abbreviation for gigahertz (GHz) crops up in two contexts. When used in relation to computer processors, it equates to 1,000MHz (see above).

As all new processors run at clock speeds much higher than would be practical to describe in MHz, GHz is used instead, ‘3.2GHz’ being simpler to convey than ‘3,200MHz’.

Elsewhere, GHz is used to define the area of the radio spectrum used for wireless networking technologies, such as Bluetooth (2.4GHz) and Wi-Fi (2.4GHz and 5GHz).

CD-RW drive speeds
With manufacturers persistently striving to outdo one another by coming up with ever-faster drives, the numbers used to signify the performance of CD-RW drives are a common cause for confusion.

CD-RW (as well as CD-Rom and CD-R) drive speeds are all based on the speed of the very first drives, which read data from discs at a leisurely 150Kb/s.

These ’single-speed’ drives soon became double-speed (300Kb/sec), then quad-speed, and at the moment the fastest drives are 52-speed (52 times faster than a single-speed drive, or 7,800Kb/sec).

When it comes to speeds for CD-R and CD-RW use, most manufacturers stick to the convention of read speed x write (CD-R) speed x rewrite (CD-RW) speed, such as 48 x 32 x 16.

DVD drive speeds
When it comes to the performance of DVD reading and writing drives, the figures and conventions are much more complicated.

The key thing to keep in mind is that a single-speed DVD-Rom drive is much faster than a single-speed CD-Rom drive – 1,385Kb/sec, compared with 150Kb/sec. Most new DVD drives are 16-speed models, 16 times faster than a single speed drive, or 22,160Kb/sec.
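
The sums behind those ‘n-speed’ figures are simple multiples of the single-speed rates, as this Python sketch shows:

    # Transfer rates for 'n-speed' CD and DVD drives, in Kb/sec
    CD_SINGLE_SPEED = 150      # Kb/sec
    DVD_SINGLE_SPEED = 1385    # Kb/sec
    print(52 * CD_SINGLE_SPEED)    # 7,800Kb/sec for a 52-speed CD drive
    print(16 * DVD_SINGLE_SPEED)   # 22,160Kb/sec for a 16-speed DVD drive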

Unfortunately, unlike the CD-RW drive market – and thanks to a number of competing recordable DVD standards – there isn’t, as yet, a single convention for the order in which the speed multiples are listed. In other words, you can’t deduce at a glance a drive’s read, write and rewrite potential.

Adding to the confusion is the fact that most recordable DVD drives can write to CD-RW and some newer models can even deal with more than one recordable DVD standard.

It’s not uncommon for fancier drives to list half-a-dozen or more speed statistics, relating to read/write/rewrite ability on the various recordable DVD formats, and again for CD media.

Hard disk speeds (rpm)
When used in relation to hard disk technology, the abbreviation rpm (revolutions per minute) signifies the spin speed of a drive’s data-storing magnetic platters.

Obviously, the higher the rpm, the faster the disk platters spin and this has an effect on the data transfer rate, i.e. how fast data can be read from and written to the disk.

A typical PC hard disk spins at 5,400rpm and, for most purposes, this is more than adequate. Faster 7,200rpm hard disks are available for a small price premium and these will give a small but noticeable performance increase.

Still faster 10,000rpm hard disks are also available but, for the moment at least, these cost a princely sum.

Printer speed (ppm)
Printer manufacturers describe the performance of their wares in pages per minute, or ppm for short.

In simple terms, this describes how many pages a printer can print in one minute: the higher the figure, the faster the printer. Actual printer performance seldom lives up to quoted performance but most printers won’t be far off.

Unfortunately, ppm is really only a useful measure of a printer’s performance for text printing. Add graphics to a page, for example, and print speed will drop, as printing graphics is more involved than printing text.

Move to printing photo-quality images on an inkjet printer and ppm ceases to be a useful measure of performance. Instead, photo print speeds are usually measured in minutes.

Whatever the printer though, manufacturers hardly ever factor in the time it takes for the first page to be printed: the time between you pressing ‘print’ in an application and the first page appearing from the printer.

This largely depends on the speed of the computer but, for complex documents, it can increase print times significantly.

Frames per second (fps)
In relation to games or video, frames per second (fps) refers to how often a graphics card updates the still images displayed on the monitor to give the impression of a moving image. The higher the fps (or ‘frame rate’, as it’s often called), the smoother on-screen motion will look.

The frame rate of a game depends most on the graphics card, which is responsible for calculating the new positions of any moving objects, working out which objects can’t be seen because they’re hidden behind others, and even shading based on ambient light conditions.

A game with just a few simple objects will run at a high frame rate on just about any graphics card. Switch to a complex 3D game that contains lots of highly detailed objects, however, and frame rates will plummet on all but the most powerful graphics cards.

One way to boost the frame rate in such instances is to drop the resolution of the game, since lower resolutions mean fewer pixels to draw, which means less work for the graphics card.

With video, fps is a measurement of how many still frames are displayed each second. This is usually fixed by the source material, although smooth playback still depends on the computer being powerful enough to decode and display each frame in time.

What constitutes an acceptable fps figure is highly subjective. Games running at below 15fps are usually unacceptably jerky, but anything above 30fps will seem smooth to all but the most demanding gamer.
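
Another way to look at frame rates is the time available to draw each frame, as this little Python sketch shows:

    # Milliseconds available per frame at various frame rates
    for fps in (15, 30, 60):
        print(fps, round(1000 / fps, 1))   # 66.7ms, 33.3ms and 16.7ms per frame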

QUALITY

Dots per inch (dpi)
The output quality of printers and the image capture ability of scanners are defined in terms of dots per inch (dpi) – a straightforward enough term, or so you’d think. In fact, dots per inch can be a wholly meaningless phrase.

That a particular printer can squeeze out so many dots of ink in a given inch is no indication of the quality of the output (the variety of ink and the size and separation of the ink droplets all contribute to the look of a printed page).

Similarly, many scanners capture far fewer dots per inch of image information than their quoted resolutions suggest. A scanner with a claimed 9,600dpi resolution, for instance, might capture just 600dpi of image information – this is the true ‘optical’ resolution.

The higher resolution figures are usually the result of interpolation, mathematical formulae that are applied to the captured data to increase the number of pixels in the final image.

Finally, dpi figures for printers and scanners are often referred to as ‘resolution’. As you’ll see from the definition of screen resolution below, monitors, printers and scanners all work with images composed of dots of colour called pixels, so there is some justification for using resolution for all three types of technology.

Screen resolution
A computer’s graphics card determines the level of detail with which images will be displayed on the attached monitor. The measurement for this detail is called ‘resolution’.

The pictures you see on screen are in fact made up of thousands (or even millions) of tiny coloured dots – picture elements or ‘pixels’ for short.

A graphics card that can output a resolution of 1,600 x 1,200 pixels, for example, will produce 1,920,000 pixels on the display – 1,600 across the screen and 1,200 down it.
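
The multiplication is easy to check in Python:

    # Total pixels at a resolution of 1,600 x 1,200
    width, height = 1600, 1200
    print(width * height)   # 1,920,000 pixels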

It’s worth noting that traditional CRT monitors can display a whole spread of resolutions, while TFT monitors are more restrictive: they are manufactured with fixed grids of picture elements, sometimes referred to as the display’s ‘native’ resolution.

A TFT monitor with a native resolution of 1,024 x 768 pixels will be unable to display images with resolutions in excess of this figure, even if the graphics card is capable of generating them at a higher resolution.

Moreover, while most TFTs can run below their native resolution, the displayed images may look distorted.

Point size
A ‘point’ (or ‘pt’) is a measurement of the size of a particular font when printed. In modern publishing it amounts to precisely 1/72 of an inch – 0.0139in (or 0.35mm). Therefore, a line of text printed at a size of 12pt will occupy a space that is about 4.2mm (12 x 0.35mm) high.

While individual characters will be shorter than this figure, the measurement includes the room occupied by the ascenders and descenders of characters, like ‘f’ and ‘p’.

For example, printed at point size 12, a lowercase ‘j’ – a character with a dot above and a descender below – will occupy roughly that full 4.2mm of height.
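
If you want to check the conversion yourself, a couple of lines of Python will do it (1pt = 1/72in and 1in = 25.4mm):

    # Convert a point size to millimetres
    def points_to_mm(points):
        return points / 72 * 25.4

    print(round(points_to_mm(12), 1))   # about 4.2mm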

Megapixel
The megapixel measurement is most often used in relation to digital cameras. It translates literally as ‘one million pixels’ and is a useful guide to a camera’s detail capture capability.

It’s safe to assume, for example, that a two-megapixel camera will snap more detailed photos than a one-megapixel model.

However, do bear in mind that resolution is just one of many factors that affect the overall appearance and quality of a digital photograph, the quality of the lens being the most significant.

You should also be aware that the resolution of the photos captured by a digital camera might well be at odds with the megapixel-multiple specification quoted by the camera manufacturer.

For instance, the maximum resolution of a so-called two-megapixel camera might actually be 1,600 x 1,200, which is some 80,000 pixels short of the two million mark.
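
The shortfall is easy to check with a little Python:

    # A 'two-megapixel' camera's actual pixel count at 1,600 x 1,200
    actual = 1600 * 1200          # 1,920,000 pixels
    advertised = 2_000_000
    print(advertised - actual)    # 80,000 pixels short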

This is because the sensors positioned on the extreme edges of the CCD, the electronic device that captures individual pixels, will probably be unusable.

There are various reasons for this. It could be that the borderline pixels are obscured by holding clasps or because the camera’s own lens design affords an insufficient passage for light to reach the CCD’s boundaries.

MISCELLANEOUS

Micron
A micron (µm) is one millionth of a metre but, if you’ve never heard of it, don’t worry – you don’t need to. In the computer world, though, the micron is of prime importance to processor manufacturers, such as Intel and AMD, because it measures the scale on which they build their chips.

Intel, for example, has long been able to produce chips that contain components measuring just 0.13 microns across, although you’d need to be in possession of an exceptionally powerful microscope to confirm this.

Of course, the development of smaller components means the production of processors with more components crammed onto their silicon, which results in more powerful processors.

In fact, the micron is not long for the computer world. Intel recently announced the introduction of an even finer process technology that makes it possible for components smaller than 0.13 microns to be produced.

Rather than dealing in fractions of a fraction, the firm has instead turned to the nanometre (nm) as its unit of measurement – one billionth of a metre.

3.5in & 5.25in drives
Desktop computer cases generally include a number of free spaces for additional drives or devices. Known as ‘drive bays’, they come in two sizes: 3.5in and 5.25in.

This measurement is taken widthways – the height measurement is more or less redundant. Most PCs come with a hard disk, CD-Rom and/or a floppy drive fitted into some of these bays and any remaining vacant bays can be used for further drives or devices.

CD-Rom, CD-RW and DVD-Rom drives always fit into a 5.25in bay, while hard disks fit into 3.5in ones.
