At this point you might say ‘Why should I care?’ After all, Moore’s law – the observation that the number of components that can be crammed onto a computer chip doubles roughly every year, making our computers ever faster and more capable – has been incredibly empowering for the making of our computing devices.
The ‘law’ was first mooted in 1965 by Gordon Moore, who, by the way, is one of the founders of Intel, the chip maker. He later revised that statement to ‘every two years’, and it has since become something of a self-fulfilling prophecy.
Intel’s first microprocessor, the 4004, was released in 1971 and packed the electronic circuits needed to do calculations into a tiny package, which at the time was regarded as a marvel. It contained 2,300 tiny transistors, each about the size of a red blood cell. Transistors act as switches, each capable of an ‘on’ or ‘off’ state, or 1 and 0 in binary code.
In 2015 Intel released its ‘Skylake’ chips, thought to contain somewhere between 1.5 billion and 2 billion transistors. These transistors are so small that you literally cannot see them without special microscopes: they are smaller than the wavelengths of light that we mere humans use to see, i.e. literally invisible.
It is hard to compare how far the technology has come from that first chip to now, but if the speed of a motor vehicle had improved at the same rate, the 2015 car would have a top speed of over 670 million km/hr, and by now it would be twice that. An extraordinary rate of progress.
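For the curious, here is a quick back-of-the-envelope check of how closely those transistor counts track Moore’s prediction. This is just a sketch in Python; the 1.75 billion figure is simply the midpoint of the 1.5–2 billion estimate quoted above.

```python
import math

# Transistor counts quoted above: Intel's 4004 in 1971 and Skylake in 2015
# (using the midpoint of the 1.5-2 billion estimate).
transistors_1971 = 2_300
transistors_2015 = 1_750_000_000
years = 2015 - 1971

# How many doublings does it take to get from 2,300 to roughly 1.75 billion?
doublings = math.log2(transistors_2015 / transistors_1971)

print(f"Growth factor:   {transistors_2015 / transistors_1971:,.0f}x")
print(f"Doublings:       {doublings:.1f}")
print(f"Doubling period: {years / doublings:.1f} years")
# Roughly 19.5 doublings over 44 years, i.e. one doubling about every
# 2.3 years -- close to Moore's revised 'every two years'.
```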
However, there are physical reasons why such progress may be nearing an end: it comes down to the amount of space available and the size of the components that have to fit into it. You might almost say there is a housing shortage. We have enjoyed ever smaller computing devices thanks to ever smaller, faster and more capable chips, but to keep going the makers would eventually have to shrink components to smaller than a hydrogen atom, and that, as far as anyone knows, is impossible.
The cost is of course also a significant barrier, and it may be that that alone kills off Moore’s law. So what is the future? It is thought that by 2020 we’ll have reached the end of miniaturization, so better software or better programming may help, as may specialized chips designed just for certain tasks. But the regular, rapid improvements in computing that we have become used to will in future need something more radical.
This is an area chipmakers are working hard on, and one of the ideas is to stack components in layers on top of one another rather than cramming more into the flat chips we are familiar with. Samsung, among others, already sells chips with stacked components, no doubt suitable for much of the coming ‘Internet of Things’, but will that do for, say, card-thin smartphones, or is it only for big new supercomputers? These so-called 3D chips could, says IBM, shrink a supercomputer that presently fills a building to the size of a shoebox.
That may sound fine, but apparently there is a problem with heat: when the surface area available for removing heat becomes much smaller than the area producing it, the chip stops working. Liquid cooling may be the answer, but you can see it is not straightforward – and quantum computers, discussed below, require cooling to close to -273 degrees C.
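To make that surface-area point concrete, here is a toy calculation (the figures are purely illustrative, not real chip specifications): the heat produced grows with every stacked layer, while the surface available to carry it away stays the same.

```python
# Toy illustration of the cooling problem with stacked ('3D') chips.
# The wattage and area below are made-up, illustrative numbers.
chip_area_cm2 = 1.0     # top surface available for heat removal
watts_per_layer = 10.0  # heat produced by each layer of circuitry

for layers in (1, 2, 4, 8, 16):
    total_heat = layers * watts_per_layer
    heat_flux = total_heat / chip_area_cm2  # watts per cm^2 of cooling surface
    print(f"{layers:>2} layers -> {heat_flux:6.1f} W per cm^2 through the same surface")

# Every extra layer adds heat, but the surface that must shed it does not
# grow, so the heat flux climbs until ordinary air cooling cannot cope --
# which is why liquid cooling keeps coming up.
```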
So what else do they have up their sleeve? Well, there is quantum computing, which uses the counterintuitive rules of quantum mechanics to build machines that can solve certain types of mathematical problem far faster than any ordinary computer. Cracking cryptographic codes is the famous application, but their most important use may be simulating the quantum subtleties of chemistry, something conventional computers find almost completely intractable.
Expect improvements in our everyday computing devices to be more limited from now on, partly because of the demise of Moore’s law and the wind-down in microprocessor improvements, and partly because of the gradual move to cloud computing – away from having the computing power in your own machine and towards big ‘cloud’-based supercomputing power unavailable to the individual machine.
Meanwhile there is lots of room to improve user interfaces: keyboards, screens, gaze tracking, gestures, voice response. Keyboards haven’t changed materially since the mechanical typewriter, the mouse was first demonstrated in 1968, and the touchscreen was pioneered in the 1970s.
Siri and Cortana will leave our computing devices and become omnipresent, as AI, the IoT and cloud computing allow virtually any device to be controlled simply by talking to it. I note that Samsung now makes voice-controlled TVs and that Google is working on electronic contact lenses to do what its Glass headset didn’t quite manage to do.
Look forward to tiny chips being embedded in everything from your fridge to your car, and to interacting with them by speaking to them – a world more monitored, and perhaps more comprehensible, than ever before.
For now, enjoy your computer or computing device and just think how far we have come in the last 50 years.