To me it sounds like the problem is more one of computation.

If we make a sensor that can count and distinguish more photons per pixel, and have enough computing behind it, the images could be rendered with enough detail to include exoplanet surfaces.

The problem of the starlight blurring the image is not unlike trying to view a 4K image file on Commodore 64 hardware.

That's not the case. You cannot beat the diffraction limit, and current telescopes are a few orders of magnitude too small to possibly resolve the surface of an exoplanet. No amount of post-processing will overcome that.

Even when not limited by the diffraction limit, post-processing can only make rather modest improvements to the sharpness of an image. In the case of exoplanets, the light from the host star and the planet needs to be separated before detection, because even if you can subtract the much brighter star in post-processing, it will leave behind irreducible noise.
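
To put rough numbers on that "irreducible noise", here's a minimal Python sketch. The photon count and the 1e-9 planet/star contrast are assumed, illustrative values (roughly what an Earth analog around a Sun-like star would present), not measurements from any instrument:

```python
# A minimal sketch of why subtracting the star in post-processing isn't enough.
import math

star_photons = 1e10          # photons from the star landing on the pixel (assumed)
contrast = 1e-9              # planet/star flux ratio, roughly an Earth analog (assumed)
planet_photons = star_photons * contrast

# Even if we subtract the star's average contribution perfectly, Poisson (shot)
# noise of roughly sqrt(N) photons remains and cannot be subtracted away.
shot_noise = math.sqrt(star_photons)

print(f"Planet signal:         {planet_photons:.0f} photons")
print(f"Residual shot noise:   {shot_noise:.0f} photons")
print(f"S/N after subtraction: {planet_photons / shot_noise:.1e}")
# -> the planet (~10 photons) is buried under ~100,000 photons of irreducible
#    noise, which is why the starlight has to be removed before detection.
```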

That's a great explanation, @IMP-9. The only part I think you didn't make clear is that diffraction, set by the size of the telescope's aperture, convolves the bright star's light with the planet's light, making it harder to eliminate the starlight before it reaches the imaging plane of the sensor.

Also I spotted another flaw in @TopCat's reasoning. The actual criterion is not more photons per pixel; one can achieve that quite simply by increasing the pixel size, but of course that reduces the resolution of the imaging instrument. The necessary criterion is to match the pixel size with the telescope's inherent resolution based on the diffraction limit. If one goes above the limit, one sacrifices resolution; if one goes below the limit, the light is spread across more pixels and one sacrifices contrast.
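
A small Python sketch of that matching criterion may help. Every number here (aperture, focal length, pixel pitch, wavelength) is an assumed, round value for illustration, not any particular instrument:

```python
# A rough sketch of matching the pixel size to the telescope's diffraction limit.
import math

wavelength = 550e-9     # m, visible light
aperture = 8.0          # m, primary mirror diameter (assumed)
focal_length = 120.0    # m, effective focal length (assumed)
pixel_pitch = 15e-6     # m, detector pixel size (assumed)

# Diffraction-limited angular resolution (Rayleigh criterion).
theta_diff = 1.22 * wavelength / aperture          # radians

# Angle on the sky subtended by one pixel (small-angle approximation).
theta_pixel = pixel_pitch / focal_length           # radians

arcsec = math.degrees(1) * 3600                    # radians -> arcseconds factor
print(f"Diffraction limit: {theta_diff * arcsec * 1000:.1f} milliarcsec")
print(f"One pixel on sky:  {theta_pixel * arcsec * 1000:.1f} milliarcsec")
# Ideally these two angles are comparable: a pixel much bigger than the
# diffraction limit throws away resolution; a pixel much smaller spreads the
# light over more pixels and costs you contrast on faint objects.
```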

Great comments guys... thanks.

Now, how about getting rid of the lens and whatnot and just making a sensor that can capture every photon individually and measure the properties of each photon? Then, arguably, we could simply calculate where each photon came from and reconstruct the image photon by photon.

It may be that the sensor is the size of the Moon and has to point towards the object for a few years to get enough data points, but given enough computing and storage power it may not be impossible.

@TopCat, the size of the pixel on the sky is the question here; you must make that as small as possible, and for that you need magnification. I'll stick to mirror telescopes rather than refractors in describing this, since I work with one, but the principle is the same either way.

Regarding the sensors, the pixel sizes are limited by the physics of photolithography; that controls how close together we can make the cells, and thus how small. In fact, since there's more to a sensor than just a bunch of cells (for example, they have to be connected together so you can read them out), the cells have to be considerably bigger than the diffraction limit of the photolitho process itself. So right there you're limited in how small a patch of sky each pixel can cover.

To get an image of something as small and far away as a planet orbiting another star, you have to apply magnification, and a lot of it. That's what the telescope is for: it lets you make the image scale small enough to see something like this.
[contd]

[contd]
I explained the next part in my other post, but here's another way of looking at it:

Now, to make the most of the magnification, you want to make your cells small; but there's no point in making them smaller than the diffraction limit of the telescope, because then you won't be getting as much light on each pixel as you could, thus limiting your ability to see dim objects.

On the other hand, you don't want the cells to be too big, because then you're wasting magnification by combining details the telescope could resolve onto the same pixel.

Remember that for a single cell you get one value for the whole cell; ideally, the cell's size on the sky matches the diffraction limit of the telescope, so you get the most light you can in each cell for the smallest detail the telescope can resolve.

Everything I'm talking about so far is before we start blocking the star's light so we can see the much dimmer planet, and before we start image processing.
[contd]

[contd]
To get an image that resolves the planet from the star, you need one pixel that sees the star alone, one pixel that sees the planet alone, and one dark pixel between them. But at or near the diffraction limit, another effect becomes important: the light from the star "bleeds over" into adjacent pixels. This happens both as a result of the diffraction rings around the star's image, and as a result of electrons collected in one cell leaking into an adjacent one, which in turn happens both through quantum tunneling and because the insulation between cells isn't perfect.

Thus, by blocking out the light of the star without affecting the light of the planet, you can get an image of the planet that won't be washed out by the light of the star. Because of that bleed-over and the leakage between cells, you can only do this in the optics, before the light reaches the sensor.
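
To illustrate how far that bleed-over reaches even with perfect optics, here is a short Python sketch of the ideal Airy pattern. The aperture, wavelength, star-planet separation, and planet contrast are all assumed values, chosen to roughly match an Earth analog around a star 10 parsecs away:

```python
# How bright are the star's diffraction rings at the planet's position?
import numpy as np
from scipy.special import j1

wavelength = 550e-9                  # m (assumed)
aperture = 8.0                       # m (assumed)
arcsec = np.pi / (180 * 3600)        # radians per arcsecond
separation = 0.1 * arcsec            # star-planet separation on the sky (assumed)

def airy(theta):
    """Relative intensity of an ideal Airy pattern at angle theta from the star."""
    x = np.pi * aperture * theta / wavelength
    return (2 * j1(x) / x) ** 2

wing = airy(separation)              # starlight "bleeding" onto the planet's pixel
planet_contrast = 1e-9               # reflected-light contrast of an Earth analog (assumed)

print(f"Star's diffracted light at the planet's position: {wing:.1e} of the star's peak")
print(f"Planet's brightness:                              {planet_contrast:.1e} of the star")
# Even a perfect telescope leaks starlight onto the planet's pixel at a level
# orders of magnitude brighter than the planet itself, before any charge
# leakage in the detector is considered -- hence blocking the star in the optics.
```
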
[contd]

[contd]
@IMP-9's point is that the telescopes we can practically make are orders of magnitude too small to resolve details on the planet. The best we can do is get a spectrum of the light reflected from the planet.

And that's not all; that spectrum will be from the star's light, traversing the planet's atmosphere twice with a reflection off the surface in between, so we have to account for the star's spectral lines before we can tell what part of the spectrum comes from the planet's atmosphere and what part from reflection off the planet's surface. To get this, the general idea is that we "subtract" the spectrum of the star from the spectrum of the planet, looking for spectral lines that aren't present in the star's spectrum.
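
As a toy illustration of that idea (done here as a ratio rather than a literal subtraction, which amounts to the same thing for spotting extra lines), here is a sketch with entirely synthetic spectra; the line positions and depths are made up:

```python
# Toy example: divide out the star's spectrum to reveal lines the planet added.
import numpy as np

wl = np.linspace(600.0, 800.0, 2000)                 # wavelength grid, nm

def line(center, depth, width=0.5):
    """Gaussian absorption line as a multiplicative dip in the continuum."""
    return 1.0 - depth * np.exp(-0.5 * ((wl - center) / width) ** 2)

# Star: continuum with a couple of stellar absorption lines (made-up positions).
star = line(656.3, 0.6) * line(700.0, 0.3)

# Planet: reflected starlight, hugely dimmed, with one extra line imprinted by
# the planet's atmosphere (hypothetical feature at 760 nm).
planet = 1e-9 * star * line(760.0, 0.4)

# Divide out the star to see what the planet's atmosphere added.
ratio = planet / star
atmosphere_only = ratio / ratio.max()                # normalize the continuum

print(f"Deepest residual feature at {wl[np.argmin(atmosphere_only)]:.1f} nm")
# The stellar lines cancel in the ratio; what's left is the planet's own feature.
```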

So we still need the light from the star, but we also have to have it in a separate image. This is what the technique described in this paper accomplishes.
[contd]

[contd]
To make @IMP-9's point more forceful: essentially this means that even with the smallest cell size and the largest magnification we can practically get, the planet will fall on only a single pixel or a handful of them. We can get more data from a spectrogram of it than from staring at that handful of pixels, because at the highest practical magnification (controlled by the smallest possible diffraction ring, which is a function of the aperture of the telescope) we can't see details smaller than those two pixels separated by a third one. With spectroscopy, though, we can tell things about the composition of the planet's atmosphere and surface.

No amount of image processing can find that dark pixel between two bright ones (or bright pixel between two dark ones, as the case may be) if it doesn't exist. That's the resolution limitation here.
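
For a sense of scale behind "orders of magnitude too small", here is a rough calculation. It assumes an Earth-sized planet at 10 parsecs observed in visible light; both choices are illustrative:

```python
# Rough numbers: what aperture would it take to resolve an exoplanet's surface?
wavelength = 550e-9                     # m, visible light
planet_diameter = 1.274e7               # m, roughly Earth's diameter
distance = 10 * 3.086e16                # m, 10 parsecs (assumed)

# Angle the whole planet subtends on the sky.
planet_angle = planet_diameter / distance            # ~4e-11 rad

# Aperture whose diffraction limit equals that angle (one resolution element
# across the entire disc -- still just a single bright dot).
d_resolve_disc = 1.22 * wavelength / planet_angle

# To see actual surface detail, say ~20 resolution elements across the disc.
d_surface_detail = 20 * d_resolve_disc

print(f"Planet subtends:                     {planet_angle:.1e} rad")
print(f"Aperture to resolve the disc at all: {d_resolve_disc / 1000:.0f} km")
print(f"Aperture for ~20 pixels across it:   {d_surface_detail / 1000:.0f} km")
# Compare with the ~10 m of today's largest optical telescopes: several orders
# of magnitude short, no matter how the pixels are processed afterwards.
```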

I hope that helps you understand the reasons for these limitations. Please ask any questions you might have.

And that, of course, makes your question a good one; it's not simple at all. So 5s for you!

Thank you for the excellent explanation. I now understand perfectly why it is not possible with current technology and the physical limits of using light.

@Da Schneib Do you think optical wavelength interferometry can have a role to play in exo-planet observations?

Would an array of space-based telescopes be a feasible way of overcoming some of the problems of ground-based optical interferometry?

@434a, eventually, yes, optical interferometry will play a part, and as you foresee, once we start putting enough telescopes in orbit or on airless moons and asteroids we'll get more and better results faster than we can with ground based instruments. Eventually we'll launch or build telescope arrays that span orbits of planets in our solar system, which will give us the distance to make interferometers with equivalent apertures tens of millions of miles across. This will give us the resolution to image exo-planets at a usable image scale, likely well before we are ready to launch expeditions to them.
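
As a back-of-the-envelope sketch of what such baselines would buy, assuming visible light, a roughly 1 AU baseline, and a target 10 parsecs away (all illustrative numbers):

```python
# Angular resolution of a two-element interferometer scales as lambda / baseline.
wavelength = 550e-9                 # m, visible light
baseline = 1.496e11                 # m, roughly 1 AU between the two telescopes (assumed)
distance = 10 * 3.086e16            # m, 10 parsecs to the target (assumed)

theta = wavelength / baseline                        # radians

# Physical size that angle corresponds to at the target's distance.
smallest_feature = theta * distance

print(f"Angular resolution:        {theta:.1e} rad")
print(f"Smallest feature at 10 pc: ~{smallest_feature:.0f} m")
# An orbit-spanning baseline would, in principle, resolve features of order a
# metre; the practical problems are combining the light coherently and
# collecting enough photons, not the geometry.
```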

As computer processing becomes less and less expensive, using such large interferometers will require less and less time for data analysis, and having watched CCDs take over from film over what seems to me but a few scant decades, I expect the confluence of these trends to continue.

@IMP-9
Even when not limited by the diffraction limit, post-processing can only make rather modest improvements to the sharpness of an image. In the case of exoplanets, the light from the host star and the planet needs to be separated before detection, because even if you can subtract the much brighter star in post-processing, it will leave behind irreducible noise.
Thanks, mate, for unwittingly confirming me correct when I pointed out that photons from far-distant sources cannot reliably be discerned from in-line-of-sight photons from intervening sources, and from gravity-redirected photons from 'side-sources', whose radiation has been put into trajectories that coincide at our detectors, which 'build up' an 'image' from individual photons accumulated over long exposure/collection times. See, @IMP-9, how much worse (for 'imaging' extreme-distance sources) would be the same problems you just admitted bedevil even nearby 'photonic image' contamination/overwhelming situations? :)