This sounds like a rather trivial calculation, but rather than actually thinking about it I just reached for a linear algebra book - no luck... Then a calculus book... still no luck, although I had some fun with the Taylor series.
A quick Google search later I had a few results, but most were for the closest point between two rays - not exactly what I wanted. Some others failed (or deliberately chose not) to note that a ray is defined as r = P0 + td where (and here's the important bit) t is an element of R AND t >= 0. Those solutions fail to account for the closest point falling at a negative (invalid) t value.
So after being let down by all these sources I had to engage brain and work it out myself - sometimes I'm really lazy in the evenings :)
Here is the solution I came up with - hopefully it helps someone else:
If Pc is the closest point on the ray to the point Px, then at the closest point the vector from Px to Pc must be at right angles to the ray's direction, i.e. (Pc-Px).d = 0
In other words, I am finding the point on the ray where the ray and the constructed vector from the point to the ray meet at right angles - that is the closest point.
We know from the ray equation that Pc = P0 + td for t>=0, and let us call the vector (Px-P0) "s" to simplify our notation. We then get:
(P0-Px+td).d = 0 becomes
(td-s).d = 0 and using the properties of the dot product this gives us:
t(d.d)-(s.d) = 0
Which rather elegantly gives us:
t = (s.d)/(d.d)
Which is pretty quick to calculate on a computer as it is merely two dot products and a scalar division. If d is normalised then d.d = 1 and it collapses to a single dot product, t = s.d.
We do need to be careful of getting a negative t value, as this is a projection onto the non-existent part of the ray. If that happens the closest point is simply the ray origin P0.
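To make the steps concrete, here is a minimal sketch in Python (function and variable names are my own; vectors are plain 3-element lists to keep it self-contained):

```python
def dot(a, b):
    """Dot product of two 3-vectors."""
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def closest_point_on_ray(p0, d, px):
    """Closest point on the ray r(t) = p0 + t*d (t >= 0) to the point px."""
    s = [px[i] - p0[i] for i in range(3)]  # s = Px - P0
    t = dot(s, d) / dot(d, d)              # t = (s.d)/(d.d)
    if t < 0.0:                            # projection lands on the non-existent
        t = 0.0                            # part of the ray: clamp to the origin
    return [p0[i] + t * d[i] for i in range(3)]
```

For example, with the ray starting at the origin along the x axis, the point (2, 3, 0) projects to (2, 0, 0), while (-1, 1, 0) would give t = -1 and so clamps to the ray origin.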