![duplicacy erasure coding](https://blog.scaleway.com/content/images/2019/04/domain-panne-az.jpg)
Be sure to watch the code sample portion of this Intel ISA-L erasure coding example, along with the rest of the playlist. Reducing software latency, and getting high throughput as a side effect, is one of the main goals of Intel ISA-L.
## Duplicacy erasure coding software
Software latency compounds tremendously in parallel systems. When any given operation is split up, parallelized, and touches thousands of systems, you want to minimize the latency incurred in software; this is a key concern for people building at larger scales. The flip side of throughput is latency, and latency especially matters here. To give a sense of scale, in the example introduced in this playlist we see roughly five gigabits per second per core of erasure coding calculations, which is quite substantial. The reason ISA-L implements these routines is to let people capture the economics of storage media as the industry looks toward the solid-state transformation. Erasure codes weren't ubiquitously adopted before ISA-L because they were computationally expensive. Now just about anyone building systems above a few nodes, whether enterprises or hyperscalers, can start taking advantage of them.
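To make the mechanics concrete, here is a toy sketch of the idea behind erasure coding: k data shards plus parity shards that let you rebuild lost data. This uses a single XOR parity (RAID-5 style) purely for illustration; the Reed-Solomon codes that ISA-L actually accelerates work over Galois fields and support multiple parity shards. The function names are hypothetical, not part of any library.

```python
# Toy erasure-coding sketch: k data shards plus ONE xor parity shard.
# Illustrative only -- real erasure codes (e.g. Reed-Solomon in ISA-L)
# use Galois-field arithmetic and tolerate multiple simultaneous losses.

def encode(data_shards):
    """Compute a parity shard as the bytewise XOR of all data shards."""
    parity = bytearray(len(data_shards[0]))
    for shard in data_shards:
        for i, b in enumerate(shard):
            parity[i] ^= b
    return bytes(parity)

def recover(shards, lost_index):
    """Rebuild one missing shard by XOR-ing all surviving shards."""
    survivors = [s for i, s in enumerate(shards) if i != lost_index]
    return encode(survivors)

data = [b"abcd", b"efgh", b"ijkl"]   # k = 3 data shards
parity = encode(data)
all_shards = data + [parity]

# Simulate losing shard 1 and rebuilding it from the survivors:
rebuilt = recover(all_shards, 1)
assert rebuilt == b"efgh"
```

The key property carries over to the real thing: any one shard, data or parity, can vanish and be reconstructed from the rest; ISA-L's contribution is making the (much heavier) multi-parity math fast enough to run at storage-system line rates.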
![duplicacy erasure coding](https://image.slidesharecdn.com/howdoeserasurecodingprotectdata-170214181751/95/how-does-erasure-coding-protect-data-2-638.jpg)
In this video, we're going to talk about Intel® Intelligent Storage Acceleration Library erasure coding. A lot of people intuitively understand RAID, but erasure coding is a little more esoteric. Triple replication is the process by which a single copy of data is mirrored in at least two other places. Many of the clouds are using erasure coding instead, especially people who build systems that scale to many nodes, more than 10 to 20. For these systems, erasure codes make a lot of sense because they give you the same redundancy guarantees as triple replication but with half the raw data footprint, or potentially less, depending on how you configure your erasure-coded system. Essentially, erasure coding continues to give access to your data even in the presence of failures: if one or two whole nodes disappear from the network, you still haven't lost access to the data. If we can shrink the cost of providing those access guarantees, in this case to half the cost by using an erasure coding scheme as opposed to triple replication, that represents giant savings in operating and capital expenditures.
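The footprint claim above can be sanity-checked with a quick calculation. The k and m values here are illustrative choices, not figures from the source:

```python
# Storage-overhead comparison. Triple replication stores 3x the raw data;
# an erasure code with k data shards and m parity shards stores (k + m) / k.
k, m = 10, 4                    # hypothetical erasure-code configuration
replication_factor = 3.0
ec_factor = (k + m) / k

print(ec_factor)                # 1.4
print(ec_factor < replication_factor / 2)   # True -- under half the footprint
```

With 10 data and 4 parity shards this configuration tolerates up to 4 simultaneous shard losses while storing only 1.4x the raw data, versus 3x for triple replication, which tolerates only 2 losses.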