Ryan’s ability to architect and build backend systems that make the IoT work for real-time engagement, while leading the company’s technical vision, brought him to his current position of CTO. As VP of Engineering, his expertise in the SaaS industry guided the company’s transition to an edge computing solution. His previous experience includes working closely with industry experts in the litigation world to create advanced data visualizations. As a trial consultant working with experts in the medical, transportation, oil & gas, and technology/structural engineering industries, Ryan helped attorneys win verdicts in the hundreds of millions of dollars, using many of the same tools he created to automate the visualization of large amounts of data in real time. That work is where he got his start with Phizzle in 2013.
About the talk
Moving computing workloads to the edge can have surprising benefits that lead to entirely new ways of interacting with data. Take one classic edge exercise: ingesting millions of time series records per second into a time series database from varied IoT data sources. High-performance time series databases require data to be inserted in order, yet geographically decentralized networks introduce significant delays in delivering data to the database: delays measured not in milliseconds but in seconds. The larger this delay window, the more data must be sorted before insertion into the database. This problem of windowing and sorting at the edge is a simple and powerful example of the true value of localized performance computing.
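The windowing-and-sorting step described above can be sketched as a reorder buffer: hold each record until the newest arrival is at least the maximum delay ahead of it, then release records in timestamp order. This is a minimal illustration, not the speaker's implementation; all names (`ReorderBuffer`, `push`, `max_delay_s`) are assumptions for the sketch.

```python
import heapq

class ReorderBuffer:
    """Buffers out-of-order records and releases them in timestamp order.

    A record may arrive up to `max_delay_s` late, so it is only safe to
    emit once the newest timestamp seen is at least `max_delay_s` ahead
    of it. Illustrative sketch only; names are not from the talk.
    """

    def __init__(self, max_delay_s=5.0):
        self.max_delay_s = max_delay_s
        self.heap = []            # min-heap keyed on record timestamp
        self.latest_seen = 0.0    # newest timestamp observed so far

    def push(self, ts, payload):
        heapq.heappush(self.heap, (ts, payload))
        self.latest_seen = max(self.latest_seen, ts)
        # Emit every record old enough that nothing earlier can still arrive.
        out = []
        while self.heap and self.heap[0][0] <= self.latest_seen - self.max_delay_s:
            out.append(heapq.heappop(self.heap))
        return out

buf = ReorderBuffer(max_delay_s=5.0)
buf.push(1.0, "a")
buf.push(3.0, "b")
emitted = buf.push(7.5, "c")   # 1.0 <= 7.5 - 5.0, so "a" is safe to release
```

At 1 million records per second this buffer holds millions of in-flight records, which is exactly the pressure the next section quantifies.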
Centralized Isn’t Streamlined: Exponential Costs Derived From Low-Performing Workflows
Consider a use case involving 1 million records per second that must be sorted into a single time series database. With a maximum measured delivery delay of 5 seconds per record, time-based sorting requires a rolling window of 5 million records.
Even using the best sorting algorithms and parsing records on other threads, this mathematically cannot be handled by a single thread. If each thread can prepare only 1 million records for insertion into the database every 5 seconds, we need at least 5 threads to keep this data in order. When parsing overhead is minimal, the database can keep up and the workload will run on most 250 W servers. But if we increase these numbers by 10x, our system requirements go up by more than 10x; for now we will assume an optimistic 14x increase in the time required to sort 10x the data.
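The thread count above is simple ceiling arithmetic, shown here under the stated assumption that one thread can prepare 1 million records per 5-second window:

```python
records_per_sec = 1_000_000
max_delay_s = 5
window = records_per_sec * max_delay_s            # 5,000,000 records in flight

per_thread_per_window = 1_000_000                 # assumed single-thread capacity
threads = -(-window // per_thread_per_window)     # ceiling division → 5 threads
```

At exactly 1M records per thread per window this lands on 5 threads with no headroom, which is why the talk calls 5 the minimum.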
Now we need not 50 threads but closer to 70. Accounting for overhead, this is clearly unmanageable on a single machine. The attached graphics depict this relatively universal algebraic computing problem as it exists at the edge. A cluster of powerful machines is now required just to sort the data, and latency is well over 70 seconds before we even begin inserting data into the database. At this point, a 10x increase in computing complexity has led to more than a 14x increase in cost. Scale this up 100x, and costs go up by more than 190x.
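The superlinear growth is what a comparison-sort model predicts as a floor. A rough n·log2(n) sketch (an assumed model, not the speaker's measurements) gives multipliers a little below the talk's 14x and 190x figures, which is consistent with real systems paying extra memory and coordination overhead on top of the sort itself:

```python
import math

def relative_sort_cost(scale, base_window=5_000_000):
    """Cost of sorting `scale`x the data relative to the base window,
    under an n*log2(n) comparison-sort model. This is a lower bound:
    real deployments add memory and coordination overhead."""
    n0 = base_window
    n1 = base_window * scale
    return (n1 * math.log2(n1)) / (n0 * math.log2(n0))

relative_sort_cost(10)    # ≈ 11.5x under the model; the talk assumes ~14x
relative_sort_cost(100)   # ≈ 130x under the model; the talk observes >190x
```

The gap between the model and the quoted figures is the overhead the talk is pointing at: the larger the centralized window, the more you pay beyond the raw sort.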
Scaling past 100x is where things get even worse.
Figure 1: Costs of Central vs. Edge at Scale
To conclude, imagine we have 8 edge nodes, each collecting 125k unique records per second. Per-thread performance at the edge is only 17% of a server CPU's, but each node now holds only 1/8th of the data per window, requiring 22 edge threads in total. Increasing the load 10x requires 10x the threads.
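The 1/8th-per-window point can be checked with quick arithmetic. Under the same assumed n·log2(n) sort model used above (my model, not the talk's), eight small per-node sorts also do slightly less total sort work than one central sort of the same data:

```python
import math

window_s = 5
central_window = 1_000_000 * window_s   # 5,000,000 records in one big sort
node_window = 125_000 * window_s        # 625,000 records per edge node

# Eight small sorts vs. one big sort over the same 5M records:
central_cost = central_window * math.log2(central_window)
edge_cost = 8 * node_window * math.log2(node_window)
ratio = edge_cost / central_cost        # ≈ 0.87: same data, ~13% less sort work
```

This is only the sorting side; the cost advantages quoted next also fold in the talk's stated price-performance and power figures for edge cores.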
At 1 million records/sec the edge is 2.5x cheaper. At 10 million records/sec the edge is 7.1x cheaper. Furthermore, the price-performance and power-performance of the edge cores are roughly double.
We now have a system that costs 7.1x less while using 6x less power and delivering 12x less latency. And the difference only increases with scale…
04:10 Scaled sorting scenario
06:23 How the edge changes the game
10:10 The IoT edge problem
12:38 The difference in chips
21:52 Constellation networks
23:55 Experiences and use cases
Alright, thank you. Today you're going to be hearing a presentation all about the edge. So why is the edge a good place to do business? Location, location, location. The power of the edge started out as an article, and then we turned it into a talk. There's a math part of this that's going to be a bit heavier. We're going to talk about some trends in transit, about what might drive more and more workloads out to the edge, and about some efficiency reasons you may not have considered in the past. We've all heard of edge computing before; this is not a new concept. It's not a new market; it's a growing market, but it's not exactly new. Of course, we'll touch on some of the network changes happening in the industry right now that are going to drive some of this change as well. We'll look at this pragmatically, because we are going to talk about future things, but I want to make sure that we circle back and talk about what we can do right now. What are the opportunities right in front of us that we may be missing?
To start out, I think it's good to acknowledge that centralization has driven the internet for a very long time. We've had the client-server model, where everything comes into the center, and that's been around for a very long time. Really, the center was on-prem for a long time; the cloud took a lot of that over. You get your hub and spoke, and it became multiple hubs and multiple spokes, but you're still bringing everything into a centralized location. Whether we talk about on-prem or the cloud, wouldn't you say it's centralized? And centralized can mean maybe a few regional data centers, but it's still a large data center. So when we talk about edge computing, we're not talking about a data center in a large city. So if you really look at these…