You are viewing a single comment's thread from:

RE: Inference Compute: More Important Than Neural Networks

in LeoFinance · 10 months ago

That article doesn't tackle the huge bandwidth problem here. Will Tesla vehicles all have SIM cards and be uploading data while driving around? Will they instead upload data at the owner's residence using their data plans? Until I see an actual plan for the bandwidth issue, I can't help but think this idea only exists to boost the stock price and nothing more.
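To put rough numbers on the bandwidth question, here is a quick back-of-envelope sketch. All payload sizes and link speeds are assumptions for illustration, not measured Tesla specs:

```python
# Back-of-envelope: how long would typical data payloads take to move
# over the links a parked or driving car might realistically have?
# All figures below are illustrative assumptions, not measurements.

PAYLOADS_GB = {
    "Model weights (fp16, ~70B params)": 140,  # one-time download per model
    "Daily batch of inference inputs": 5,      # hypothetical workload size
    "Daily batch of inference outputs": 1,     # hypothetical result size
}

# Assumed sustained throughput in megabits per second.
LINKS_MBPS = {
    "LTE uplink (driving)": 10,
    "LTE downlink (driving)": 30,
    "Home WiFi / broadband": 100,
}

def transfer_hours(size_gb: float, mbps: float) -> float:
    """Hours to move size_gb gigabytes over a link of mbps megabits/sec."""
    bits = size_gb * 8e9            # GB -> bits
    return bits / (mbps * 1e6) / 3600

for payload, gb in PAYLOADS_GB.items():
    for link, mbps in LINKS_MBPS.items():
        print(f"{payload:<36} over {link:<24}: "
              f"{transfer_hours(gb, mbps):6.1f} h")
```

Under these assumed numbers, even a one-time weights download takes over a day on a cellular link, which is why an at-home WiFi plan looks like the only semi-plausible path.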


All Teslas come with SIM cards, so the question is connectivity. Some owners opt not to connect and instead go through WiFi. For those who do connect, it is $9.99 per month, which provides over-the-air updates and the infotainment package while the vehicle is moving.

As for the inference, I would imagine it depends on the hardware; each generation can handle more. Hardware 5 will likely allow the vehicle to be moving, even on FSD, and still handle inference. I am not sure of the potential of the earlier versions; on those, it might have to be done when the vehicle is not being utilized.

I totally understand that the hardware might be sufficient for inference... but as I understand it, the compute power is potentially restricted by the amount of data that can be uploaded to and downloaded from those computers. If data can't be transferred quickly, then the compute power of each device becomes far less important.

You can imagine that Amazon or xAI would have incredible data connections, whereas Tesla might be relying on SIM cards and suboptimal WiFi connections.
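To make that concrete: a node's useful throughput is capped by whichever is slower, shipping the work in or computing it. Here is a minimal sketch of that tradeoff; the job size, compute time, and link speeds are all hypothetical, and it assumes transfer and compute do not overlap:

```python
# A minimal sketch of the "compute vs. bandwidth" argument: a node only
# does useful work for the fraction of wall-clock time it is not waiting
# on data transfer. All numbers are assumed for illustration.

def effective_utilization(work_mb: float, compute_s: float, mbps: float) -> float:
    """Fraction of time the node spends computing rather than waiting on I/O."""
    transfer_s = work_mb * 8 / mbps   # MB -> megabits, then seconds on the link
    return compute_s / (compute_s + transfer_s)

# Hypothetical job: 50 MB of inputs that takes the onboard chip 20 s to process.
for link, mbps in [("LTE uplink", 10), ("Home WiFi", 100), ("Datacenter NIC", 10_000)]:
    u = effective_utilization(work_mb=50, compute_s=20, mbps=mbps)
    print(f"{link:<15}: {u:5.1%} of wall-clock time doing useful compute")
```

Under these assumed numbers, the same chip sits idle two-thirds of the time on an LTE uplink but is almost fully utilized on a datacenter link, which is exactly the objection: slow links make each device's compute power far less valuable.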