
If bandwidth were the main limiting factor, they would just take video intermittently and downlink it over a long period of time. This would be useful for documenting real-time changes on the ground during important isolated events (e.g., rocket launches). But they apparently don't have this capability, and I think it's because the hardware for taking many frames per second just isn't worth the cost and complexity.


As mentioned in parent, the rate you can capture depends on the resolution (i.e. image size in bytes), downlink speed, available memory, and length of the event.

As the length of the event approaches infinity, the framerate you can capture is basically bandwidth / image size: `(MB/s)/MB = 1/s` (i.e., Hz). Beyond that limit, memory size / image size gives you a hard cap on the number of additional frames you can buffer. To get a constant frame rate, you want to spread that buffered quantity over the duration of the event.
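To make the arithmetic concrete, here's a small sketch of that frame budget. All the numbers (bandwidth, frame size, memory, event length) are made-up placeholders, not real satellite specs:

```python
# Hypothetical frame-budget arithmetic for a bandwidth-limited imager.
bandwidth_mb_s = 2.0    # downlink rate, MB/s (assumed)
image_mb = 50.0         # size of one frame, MB (assumed)
memory_mb = 4000.0      # onboard buffer, MB (assumed)
event_s = 600.0         # event duration, seconds (assumed)

# Sustainable rate: (MB/s) / MB = 1/s, i.e. Hz.
sustained_fps = bandwidth_mb_s / image_mb

# Extra frames the buffer can absorb beyond the sustained rate.
buffer_frames = memory_mb // image_mb

# Spreading the buffered frames over the event gives the average rate.
total_frames = sustained_fps * event_s + buffer_frames
avg_fps = total_frames / event_s
```

With these placeholder numbers the sustained rate is only 0.04 Hz, and the buffer lifts the 10-minute average to roughly 0.17 Hz, which illustrates why long events collapse toward the pure bandwidth / image-size limit.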


The playback speed can also be adjusted by changing the frame rate of playback. Each frame just stays on the screen for a longer time. Similar to the -delay <n> parameter when creating a GIF in ImageMagick.
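For example, ImageMagick's `-delay` takes ticks of 1/100 of a second per frame by default, so the value for a target playback rate is just 100 / fps. A tiny sketch of that conversion (the fps values are arbitrary examples):

```python
# Convert a desired playback frame rate to an ImageMagick -delay value.
# -delay is in "ticks", 1/100 of a second per frame by default.
def imagemagick_delay(fps: float) -> float:
    return 100.0 / fps

slow = imagemagick_delay(5.0)    # 20 ticks/frame -> slow playback
fast = imagemagick_delay(25.0)   # 4 ticks/frame -> faster playback
```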


That's not in question. Yes, playback framerate is independent from recording framerate.

The question is the availability of resources (memory and bandwidth) on the satellite, and how many frames it can store without needing to transmit them over its slow downlink.


The frames are already back on Earth now, as evidenced by the video being on YouTube. Adjusting the playback rate of those frames is what I'm talking about...


I don't think memory is the limiting design factor either.



