Contents

PaperSpace and Future Hyper-Converged Computing (Link)

Through an old article in my RSS feed, I came across this very long Twitter thread. I found it worth sharing, as it expands on one of the directions tech is rapidly moving in. Will we all welcome it? Will it blend well with FOSS? Will it run EMACS or VIM?!?

/2020-06-08-paperspace/Noah.jpg

Source Twitter Post

Copied in full below for easier reading


Paperspace and Rollapp have my head spinning. It’s so clear this is the future and so clear hardly anyone sees it.

Enterprise has an emerging model called Hyper-Converged Infrastructure (HCI) which is basically: the compute, the storage, the display, the delivery, and even the software don’t exist until you need them.

What that means is: given a huge data center, you can choose from whatever resources are available at the moment and run a streaming software experience to the end user by connecting them all on-the-fly.

So a processor in Denver and storage in Albuquerque can simulate a single app experience in Sheboygan. The hypervisor makes sure it’s all seamless by containerizing everything.

So it’s on-the-fly software legos. Currently, software expecting to run on a desktop is virtualized and runs in this world without knowing it.

The next step is, now we can write software that anticipates this environment.

When you think of Notion or Airtable, they are basically interfaces to a database. Data sits in blocks (chunks of markdown and JSON) and has no structure until it’s displayed. It’s almost like a 3D game – there are granular resources which are rendered at run-time.

Also like a 3D game, if something is off-screen, it isn’t rendered. If something is in a different database, it isn’t accessed. So the app itself can be extremely lightweight and delivered in a ton of formats.

A table can quickly become a spreadsheet, or a kanban, or a wiki article. It’s just a different way of displaying the blocks.
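The "same blocks, different views" idea can be sketched in a few lines. This is a toy illustration, not how Notion or Airtable actually work internally; the block fields and render functions are all hypothetical.

```python
# Hypothetical "blocks": structureless chunks of data that only gain
# a shape (table, kanban, wiki) at display time.
blocks = [
    {"id": 1, "title": "Ship login page", "status": "Done"},
    {"id": 2, "title": "Fix search bug", "status": "In Progress"},
    {"id": 3, "title": "Write docs", "status": "To Do"},
]

def as_table(blocks):
    """Render the blocks as rows of a plain-text table."""
    lines = ["id | title | status"]
    for b in blocks:
        lines.append(f"{b['id']} | {b['title']} | {b['status']}")
    return "\n".join(lines)

def as_kanban(blocks):
    """Group the very same blocks into columns keyed by status."""
    board = {}
    for b in blocks:
        board.setdefault(b["status"], []).append(b["title"])
    return board

def as_wiki(blocks):
    """Render the same blocks again, as a markdown-style outline."""
    return "\n".join(f"- **{b['title']}** ({b['status']})" for b in blocks)

print(as_table(blocks))
print(as_kanban(blocks))
print(as_wiki(blocks))
```

Nothing about the blocks changes between views – only the render function does, which is the point the thread is making.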

In a hyperconverged world, the blocks aren’t even in the same database. They could be anywhere. They could be anything. They could have a CPU assigned to them – each cell in a spreadsheet running in a different data center.

This used to be insanely difficult, but now it’s not even expensive. Paperspace costs about 1/10th of a penny per minute, and that’s *marked up*.

By comparison, a MacBook Air costs about 6x as much, if you use it 8 hours a day, every day.
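A quick sanity check of that comparison. The Paperspace rate is taken from the thread; the MacBook Air price and the one-year window are my assumptions, not the author's figures.

```python
# Back-of-envelope check of "a MacBook Air costs about 6x as much,
# if you use it 8 hours a day, every day."
paperspace_per_minute = 0.001      # ~1/10 of a penny per minute (from thread)
minutes_per_day = 8 * 60           # 8 hours a day
days = 365                         # assumed window: one year of daily use

cloud_cost = paperspace_per_minute * minutes_per_day * days
macbook_air = 999.0                # assumed retail price

print(f"cloud, one year: ${cloud_cost:.2f}")      # $175.20
print(f"ratio: {macbook_air / cloud_cost:.1f}x")  # 5.7x
```

Under those assumptions the ratio lands at roughly 6x, so the thread's claim is at least arithmetically plausible.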

So now you can assign an entire computer to a single cell in a spreadsheet, and assign another computer to unify all of the calculations happening across the cells.
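A minimal sketch of that fan-out/unify shape, with a thread pool standing in for the fleet of remote machines. Everything here is illustrative; no real distributed runtime is involved.

```python
# Toy version of "one computer per cell, plus one to unify them":
# each cell's formula runs in its own worker, then a separate
# aggregation step combines the results.
from concurrent.futures import ThreadPoolExecutor

def evaluate_cell(value):
    """Stand-in for a remote machine computing one cell (here: squaring)."""
    return value * value

cells = [1, 2, 3, 4]

# Fan out: one "machine" per cell.
with ThreadPoolExecutor(max_workers=len(cells)) as pool:
    results = list(pool.map(evaluate_cell, cells))

# Unify: a dedicated step aggregates across all cells.
total = sum(results)
print(results, total)  # [1, 4, 9, 16] 30
```

Swapping the thread pool for machines in different data centers changes the latency, not the shape of the program – which is why the thread argues software could be written to anticipate it.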

But no one is designing software like this currently.

It’s not hard to think of a different computer handling part of a task – the Xbox One supposedly had cloud rendering, but it didn’t do much.

Google Stadia runs games inside a hyper-converged environment, but they’re all currently games designed to run on a single desktop.

The next wave in personal computing is going to come when we have 5G, because then we’ll know we can expect there to be fiber-speed connectivity under all conditions.

Then we’ll know that we can have any part of an application rendered anywhere. Instead of embedding gifs in a tweet, you could embed a whole game. And it’ll take zero time to load.

Any machine learning application can have unlimited processing power to compute with, because the results will be streamed from special-purpose chips in the cloud.

Suddenly carrying around the fastest available processor in your pocket, only to have it be powered down 90% of the day, will seem absurd.

Rather than push 5nm CPUs to the edge devices, it’ll make way more sense to fill semi trailers with them and leave them every few miles parked next to a fiber line and a power pole.

“Computers” will become just screens. There will be nothing to upgrade, except the modem. Your “desktop” will be a Chromecast-sized puck with a 5G SIM card.

Who will create the apps for this new future? No-code will be the only way to build, with entire apps abstracted into a single drag and drop bundle of code; the complexity of building apps will be so abstracted, developers will become simple assemblers.

All software will be manipulatable, and your OS will be a semi-intelligent assembler of these blocks – Siri & Shortcuts are a lot closer to the future than Xcode.

This isn’t even the future - this is all happening right now in enterprise. It’s a dam waiting to burst in the consumer space.

Right now we think of Google for search and G Suite, but Stadia + Colab are their path to the future.

Stadia just exists to get people used to the idea; it’s a Chromecast with a controller. Soon they’ll do it via a Chromebook shape and no one will even realize.

What place does Microsoft hold in this future? They’ll provide the hypervisor, the IDE, and the compute, via Azure. You won’t know you’re buying from them but they’ll power everything you do.

Amazon will obviously benefit – most block-based software will run on AWS, whether anyone knows it as a consumer or not.

Facebook is already running in this mode internally, and their XR future can be seen in projects from FAIR – Detectron 2 in particular – where they’re looking into ways to hook block-based compute to real-world objects.

Apple is the wildcard here. They’ve never quite earned a reputation as being good at cloud compute. But they definitely have a reputation of seeing the future first – and creating the defining interface toward it.

People think of XR headsets as the next interface of computing, but more importantly it seems XR will be the stepping stone to a block-based software future. The glasses are just the gimmick to get you to stop expecting a glowing rectangle.

In Apple’s view of the future, compute will be tied to objects, but in a personal way – all personally identifiable compute will happen, encrypted, in your pocket, before an abstracted agent reaches out to a data center for more resources.

In exactly the same way they anonymously follow drivers for just a single block (but never more than a block) for mapping, compute will happen in the same way. Each chunk of HCI processing will be encrypted, anonymous, unknown, except when displayed.

In the same way Catalyst brought small-screen software to the desktop, eventually there’ll be a path for block-based software to get to phones, from the glasses.

It’s less interesting to think about where this will all go – that part is clear and inevitable – than about the PATHS to get there, and how we can each contribute along the way. What can you build today to anticipate this tomorrow?