I'm surprised that the PS4 does interlaced output. It was pretty featureless at launch, but it looks like Sony is responding to requests - maybe it won't be long before it can work as a decent media server like the PS3.
As for the Xbox One, Microsoft kinda dug themselves into a hole with every part of their implementation. To get it to function like a tuner/media box they use a separate rendering layer, essentially driving two displays' worth of output and overlaying one on the other. There are three resolution-independent layers total, though I think the third one is limited somehow, can't recall exactly how right now. Anyway, they're pretty much fucked because the conversion to interlaced would have to happen after the layers are composited, introducing latency and inevitably eating up either GPU or CPU resources. And aside from the 32 MB of fast on-chip ESRAM that's typically used for frame buffers, there simply isn't much bandwidth available for other tricks. Not to mention the whole Kinect fiasco. On the 360 the Kinect necessarily introduced latency, since it was a high-bandwidth, high-power USB accessory, so on the Xbox One they tried to fold as much of the Kinect processing into the main GPU/CPU as possible rather than rely on a slow external protocol.
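To make the interlacing cost concrete, here's a minimal sketch in plain C of the extra pass that post-composite interlaced output implies (the buffer layout and names are made up for illustration, not the actual Xbox One display pipeline): once the layers have been flattened into one progressive frame, every pixel has to be touched again just to split the frame into fields for scanout.

```c
/* Sketch: splitting a composited progressive frame into two fields.
 * Illustrative only - sizes and names are assumptions, not real
 * Xbox One structures. */
#include <stdint.h>
#include <string.h>

#define WIDTH  1920
#define HEIGHT 1080

/* One fully composited 1080p frame, 4 bytes per pixel. */
typedef struct { uint32_t px[HEIGHT][WIDTH]; } frame_t;

/* Two half-height field buffers for 1080i scanout. */
typedef struct { uint32_t px[HEIGHT / 2][WIDTH]; } field_t;

static frame_t composited;            /* output of the layer compositor */
static field_t top_field, bottom_field;

/* Copy even scanlines into the top field and odd scanlines into the
 * bottom field. This re-reads and re-writes the entire frame, which
 * is where the extra bandwidth cost comes from. */
static void frame_to_fields(const frame_t *src, field_t *top, field_t *bot)
{
    for (int y = 0; y < HEIGHT; y += 2) {
        memcpy(top->px[y / 2], src->px[y],     WIDTH * sizeof(uint32_t));
        memcpy(bot->px[y / 2], src->px[y + 1], WIDTH * sizeof(uint32_t));
    }
}

int main(void)
{
    /* Runs every frame, after compositing, before scanout. */
    frame_to_fields(&composited, &top_field, &bottom_field);
    return 0;
}
```

Back-of-the-envelope: a 1080p RGBA frame is about 8 MB, so re-reading and re-writing it 30 times a second for 1080i is on the order of 500 MB/s of extra memory traffic - exactly the kind of bandwidth that neither the ESRAM nor the DDR3 bus has to spare.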
Sony has greater flexibility with their setup. For one thing, some basic system functions can be offloaded to an adjacent ARM processor, though right now I think they only use it during power-save mode. Sony also runs a lightweight, barebones OS and is only using a fraction of the 3 GB of RAM set aside for it. If, god forbid, the PlayStation Camera catches on and they come up with a new bandwidth-intensive generation of the tech, they'll still have room to work with, and would likely offload a lot of the legwork to the hardware embedded in the camera itself.
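For the offload idea, the general pattern is a shared mailbox between the main cores and the coprocessor. Here's a minimal sketch - the command set and interface are invented for illustration, since Sony's actual background-processor interface isn't public as far as I know:

```c
/* Sketch: main cores post background jobs into a shared mailbox and
 * can power down; a low-power coprocessor drains the queue. All
 * names here are hypothetical, not a real console API. */
#include <stdio.h>

enum bg_cmd { BG_DOWNLOAD = 1, BG_CHECK_UPDATES, BG_CAMERA_PREPROC };

#define MBOX_SLOTS 8
static volatile enum bg_cmd mailbox[MBOX_SLOTS];
static volatile unsigned head, tail;   /* producer/consumer indices */

/* Main-core side: queue a job and return immediately. */
static int post_background_job(enum bg_cmd cmd)
{
    if (head - tail >= MBOX_SLOTS)
        return -1;                     /* mailbox full */
    mailbox[head % MBOX_SLOTS] = cmd;
    head++;                            /* real hardware would add a memory
                                          fence and ring a doorbell here */
    return 0;
}

/* Coprocessor side: drain jobs while the main SoC sleeps. */
static void coprocessor_service_loop(void)
{
    while (tail != head) {
        enum bg_cmd cmd = mailbox[tail % MBOX_SLOTS];
        tail++;
        printf("coprocessor handling job %d\n", (int)cmd);
    }
}

int main(void)
{
    /* Simulated on one core here; on real hardware the two sides run
     * on different processors sharing this memory. */
    post_background_job(BG_DOWNLOAD);
    post_background_job(BG_CHECK_UPDATES);
    coprocessor_service_loop();
    return 0;
}
```

The point being: downloads and update checks can run without waking the main APU at all, and a beefier camera could similarly push its preprocessing into its own embedded hardware instead of stealing GPU/CPU time.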
On the interlaced thing - yeah, Microsoft could probably do it if they put in the resources to reconfigure a few things, but is it worth it to them? After all, they can't even be arsed to consider backward compatibility.