"Almost all computer chips use two types of transistors: one called p-type, for positive, and one called n-type, for negative."

This seems quite an inaccurate headline. Or at least misleading.

Actually, this article seems to explain transistors really poorly overall, and it also leaves out a lot of pertinent detail. Or is it just me?

@neutrino64: The article is referring to CMOS transistors, in which a p-channel MOS transistor is paired with an n-channel MOS transistor. This is done so that essentially no power is drawn except when switching.
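The complementary pairing can be sketched as a toy model (an illustrative simplification, not real device physics): in an idealized CMOS inverter, the p-channel transistor conducts when the gate is low and the n-channel transistor conducts when the gate is high, so in either steady state there is no path from supply to ground.

```python
def cmos_inverter(gate_high: bool) -> bool:
    """Output level of an idealized CMOS inverter (toy model)."""
    p_conducts = not gate_high   # p-MOS pulls the output up to Vdd when the gate is low
    n_conducts = gate_high       # n-MOS pulls the output down to ground when the gate is high
    # Exactly one of the pair conducts in either steady state, so no
    # static current flows from Vdd to ground - power is drawn only
    # during the switching transition itself.
    assert p_conducts != n_conducts
    return p_conducts            # output is high iff the pull-up conducts

print(cmos_inverter(False))  # → True  (input low  -> output high)
print(cmos_inverter(True))   # → False (input high -> output low)
```

This is just the logic of complementary switching; real devices additionally leak and dissipate dynamic power charging the gate capacitance.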

The article does give a brief summary about how a transistor works, but the part about the gate controlling the 'switch' was put with the tri-gate discussion rather than with the discussion on the switch controlling the current flow.

The article is short on details, but then it is a summary of a conference paper and not the paper itself. This fits with Phys.org being a summary of science news, rather than diving into the details of any one area.


@RealScience:

Fair enough. I guess in my opinion it's better to either get into a bit more accuracy and detail or give more of an overview explanation.

This article just seemed to shoot somewhere in the middle, addressing neither the needs of the general public nor those of people with a personal interest in the subject. But I understand it's a difficult line to walk. Either way, interesting stuff.


I think physorg tries to keep its articles within a certain word range generally, which allows it to be good for a quick reading, and keeps it from becoming an interminable dissertation.

It's only meant to scratch the surface - and while often it feels too rushed, I can always go to wikipedia or another relevant site if I am left confused or wish to go more in depth.

Fair point. Anyway, don't want to derail the conversation from the topic. Appreciate the feedback.

"I think physorg tries to keep its articles within a certain word range generally, which allows it to be good for a quick reading, and keeps it from becoming an interminable dissertation."
It just reprints the university news. You're barking up the wrong tree - you should write a letter to the MIT Media Relations staff, who are responsible for the superficial reporting about MIT research.

IMO this article is rather good; it just lacks some illustrations, which makes it less comprehensible - for example of the tri-gate design, or a more illustrative explanation of the mechanism by which stress affects charge-carrier mobility, and so on. The principle is the same as with electrons inside a peeled-off graphene layer: due to the mutual stress the electrons are squeezed against each other, their repulsive forces overlap and compensate mutually, so that the charge carriers move freely in a ballistic regime, i.e. in a similar way to electrons within superconductors.

Interesting work! I wonder whether the strain can be maintained at small processor geometries.

As the tri-gate gets smaller, the number of atoms that make up its length and width becomes smaller. At, say, 25 nm width this number would be approximately 75 atoms. One can imagine that the strain varies with such a small number of participating atoms, making the strain large in some devices (high hole mobility) and smaller in others (lower hole mobility). Or is this not correct? At least with doping, the small number of atoms has been shown to increase transistor-to-transistor variation.
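A back-of-the-envelope sketch of this variability argument (the ~0.33 nm atomic spacing is an assumed illustrative figure, not a precise silicon lattice parameter): if device-to-device fluctuations scale roughly as 1/√N in the number of participating atoms, a ~75-atom-wide fin would see variations on the order of 10%.

```python
import math

spacing_nm = 0.33   # assumed atomic spacing, illustrative round number
width_nm = 25.0     # fin width from the comment above

n_atoms = width_nm / spacing_nm        # roughly 75 atoms across the width
rel_fluct = 1.0 / math.sqrt(n_atoms)   # Poisson-like relative variation

print(round(n_atoms))       # → 76
print(round(rel_fluct, 3))  # → 0.115, i.e. on the order of 10 %
```

This is only a statistical scaling argument, not a strain calculation, but it shows why so few atoms can plausibly translate into noticeable transistor-to-transistor spread.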

The level of strain is limited by the signal/noise ratio - as Esaki found at the beginning of the 1960s, electrons within highly strained semiconductor structures tend to oscillate as high-frequency noise. The strain increases the free path for carrier motion, and the moment it exceeds the size of the p-n structure, nothing can keep the electrons there in a defined state. Therefore processing speed is always balanced against the achievable density of integration. Currently it is easier to achieve higher processing speed with higher integration density and parallelization of CPU cores, which is why CPU frequency has stagnated over the last ten years.

@valeria - It's not that your comment is incorrect, but I believe that you are getting rated so low because your comment is convoluted and unnecessarily complicated, as if you are trying to make yourself out to be smarter than everyone else.

Let me summarize the challenges in a simpler, more straightforward way.

Currently, increasing the clock speed in chips creates too much signal noise for the silicon transistor to manage adequately beyond a certain point. When the noise becomes too high, the transistors do not switch as reliably (creating errors) and generate excess heat, which only compounds the problem.

I assume that the commenters on this site are wise enough to realize that multiple cores are a workaround, provided the software can be parallelized, but they are somewhat limited in how much utility and scalability they can provide for general consumer computing outside of graphics applications.
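The diminishing returns of adding cores can be illustrated with Amdahl's law (the 0.9 parallel fraction below is an assumed example value, not a measured figure for any real workload):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Maximum speedup when only part of the work can be parallelized."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even with 90 % of the work parallelizable, speedup saturates quickly:
for cores in (2, 4, 8, 64):
    print(cores, round(amdahl_speedup(0.9, cores), 2))
# → 2 1.82
#   4 3.08
#   8 4.71
#   64 8.77
```

With a 10% serial portion the speedup can never exceed 10x no matter how many cores are added, which is why highly parallel workloads like graphics benefit far more than typical consumer software.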

"...are getting rated so low because your comment is convoluted and unnecessarily complicated..."
I'm usually downvoted only by a "lite" account, which is 1) a dedicated voting troll and 2) labels posts mindlessly on a personal, not factual, basis. I just don't like skipped logical steps in reasoning. So if you write "increasing clock speed in chips creates too much signal noise", then 1) you're stepping outside the carrier-mobility context of this article, and 2) you don't explain the actual nature of the problem.

Of course increasing the clock speed increases the noise/signal ratio - but that is exactly the problem the faster transistors are supposed to solve, is it not? A slightly deeper analysis (which is probably too complex for superficial readers of PO, though) demonstrates that this problem is fundamental and cannot be removed by using transistors with higher carrier mobility, because the technology has already hit Heisenberg's uncertainty principle.

Heisenberg's uncertainty principle says that if we increase the speed of electrons, we decrease their localization accordingly - which simply prohibits further miniaturization of transistors. This is a somewhat deeper problem than "increasing clock speed in chips", which "creates too much signal noise" - don't you think?
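The scaling claim above can be made quantitative with an order-of-magnitude sketch of the position-momentum uncertainty relation, Δx ≳ ħ/(2·m·Δv). The ~1e5 m/s velocity spread below is an assumed example value (the order of the electron saturation velocity in silicon), chosen only for illustration:

```python
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron rest mass, kg

def min_localization_m(delta_v: float) -> float:
    """Minimum position uncertainty (m) for a given velocity spread (m/s)."""
    return HBAR / (2.0 * M_E * delta_v)

# For a velocity spread around 1e5 m/s, the electron cannot be
# localized much better than roughly half a nanometre:
print(round(min_localization_m(1e5) * 1e9, 2))  # → 0.58 (nm)
```

That length scale is already comparable to the smallest feature sizes under discussion, which is the sense in which faster carriers and tighter localization pull in opposite directions.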

I have no need to pretend to be smarter than everyone else - I'm just focused on the actual problem. Which is something difficult to expect from the contemporary generation of mainstream physicists, who are required not only to solve problems, but also to provide continuation of the job while the problem is being solved.