Fastly may have set up a team to run its Asia-Pacific (APAC) business just under a year ago, but the company’s founder Artur Bergman believes the content delivery network (CDN) supplier’s developer roots will give it an edge over rivals.
Like other CDN players, Fastly has branched out to offer edge compute and security services to help enterprises speed up and run applications at edge locations while fending off cyber threats, including distributed denial of service (DDoS) attacks.
But it thinks differently about those investments, focusing on scaling its points of presence (POPs) both horizontally and vertically, preventing data leakage, and supporting WebAssembly, which brings new capabilities and additional security guarantees to cloud-native developers.
In an interview with Computer Weekly in Singapore, Bergman, who is also Fastly’s chief architect, discusses the company’s competitive play, its investments in APAC and new capabilities that customers can expect next year.
Tell us more about Fastly, its focus on developers and how it is competing against Cloudflare, Akamai and others in the market.
Bergman: We founded the company in 2011, but our story goes back to 2008 when I was CTO of a company called Wikia, which is now Fandom. We built wikis and we wanted our wikis to run fast around the world.
The content of wikis doesn’t change very often, but when a user edits a wiki, the content has to change immediately. We wanted to cache the content at the edge, but none of the providers could guarantee fast cache invalidation. So, we ended up building our own small CDN.
In 2011, I left Wikia to start Fastly and Wikia became our anchor customer. The idea was that we should be flexible enough so that no one should feel the need to build their own CDN. We should provide enough control to our customers so that they can treat the edge as their own.
From day one, we also had this dream of delivering full compute at the edge, but it took us 10 years to get to a point where we had the network and the technology to be able to deliver a solid compute environment at the edge, which we did in 2021.
We were developers, but our competitors were not developer-friendly at all – and they still kind of aren't. We wanted to integrate the edge with our application to get the maximum benefit. We also focused on small files. Back then, CDNs were optimised either for a few files or for large videos, but not for billions of small files like user-generated content. We also went into API [application programming interface] caching, which was extremely important because most of those were small files. We built a different architecture to enable that.
Our first customers were high-tech companies, but we soon expanded our reach to publishers who produced segmented or small videos that worked well with our architecture, notably through request collapsing, a kind of global application-level multicast. Regardless of the number of requests we get for a new object, we only go back to the origin once. We also looked at e-commerce, which drove us to invest more in security. That led to the Signal Sciences acquisition to beef up our security story.
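The request-collapsing behaviour Bergman describes can be sketched in a few lines of Python. This is a toy illustration of the idea, not Fastly's implementation: concurrent cache misses for the same object are "collapsed" so that only one fetch goes back to the origin, and every waiting request is served from that single response.

```python
import threading

class CollapsingCache:
    """Toy sketch of request collapsing: concurrent misses for the
    same key trigger exactly one origin fetch; all other requests
    wait for that fetch and reuse its result."""

    def __init__(self, fetch_origin):
        self.fetch_origin = fetch_origin  # callable: key -> value
        self.cache = {}                   # key -> cached value
        self.inflight = {}                # key -> Event for a pending fetch
        self.lock = threading.Lock()

    def get(self, key):
        with self.lock:
            if key in self.cache:
                return self.cache[key]        # cache hit
            event = self.inflight.get(key)
            if event is None:                 # first miss: we become the leader
                event = threading.Event()
                self.inflight[key] = event
                leader = True
            else:
                leader = False                # another request is already fetching
        if leader:
            value = self.fetch_origin(key)    # the single trip to the origin
            with self.lock:
                self.cache[key] = value
                del self.inflight[key]
            event.set()                       # release all waiting requests
            return value
        event.wait()                          # wait for the leader's fetch
        with self.lock:
            return self.cache[key]
```

However many threads call `get("same-key")` at once, the origin callable runs once; everyone else is served from the collapsed response.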
What about the larger enterprise space now that you’ve got a security portfolio in the fold? What sorts of workloads are enterprises running on your platform?
Bergman: We’ve been in the enterprise space for a long time, and most of our revenue comes from enterprise customers. It’s been the case for the past seven years.
Enterprises use us for communications between their end-users or devices and central systems. End-users can be people, but also IoT [internet of things] devices that make API calls. The workloads are diverse in what they’re processing, but it’s fundamentally data that’s going from the cloud or datacentres to devices and end-users.
Besides caching and serving images, we also host dynamic content which our customers update using our APIs. In the e-commerce space, this could be caching information on pricing and availability, so when the price is changed, we’ll update the cache.
That means your application can be fast around the world. It doesn’t matter where your datacentre is. The data relevant to the users in a region will be cached in that region so you get as many interactions as possible, and you don’t have to go back to the datacentre.
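The pattern Bergman describes for dynamic content – cache pricing or availability at the edge, then purge it the instant the origin data changes – can be sketched as a tag-based cache. This is a hypothetical illustration (Fastly's actual purge and surrogate-key APIs differ): cached objects carry tags, and an update purges every object sharing a tag.

```python
class EdgeCache:
    """Toy sketch of tag-based edge caching: objects are cached with
    tags, and an API-driven update purges everything carrying a tag,
    so e.g. a price change invalidates every page that showed it."""

    def __init__(self):
        self.store = {}   # key -> cached value
        self.by_tag = {}  # tag -> set of keys carrying that tag

    def set(self, key, value, tags=()):
        self.store[key] = value
        for tag in tags:
            self.by_tag.setdefault(tag, set()).add(key)

    def get(self, key):
        # Returns None on a miss, i.e. after a purge
        return self.store.get(key)

    def purge_tag(self, tag):
        """Invalidate every cached object carrying this tag, e.g. when
        the origin pushes a price change through the API."""
        for key in self.by_tag.pop(tag, set()):
            self.store.pop(key, None)
```

A product page and a search result that both embed the price of one SKU can be tagged with that SKU; one purge call after a price change invalidates both.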
From a security point of view, we took a different path. The worst thing our customers can do to themselves is have a bug that leaks data between users. To prevent that, we spin up a sandbox per request and when the request is over, we tear the sandbox down.
We also ended up investing in WebAssembly. We started the Bytecode Alliance for server-side WebAssembly, but we also wanted to support multiple languages, as enterprises each have a list of languages they use. That meant we had to reduce the cold startup time. Some of our competitors' cold startup times are 500ms to one second, but ours is 50 microseconds. Customers also don't have to worry about memory safety – that guarantee is part of the sandbox.
How is Fastly helping customers to mitigate DDoS attacks and doing things differently when it comes to DDoS mitigation?
Bergman: We do a fair amount of DDoS mitigation, and we use our network to our advantage. We have powerful machines that can handle large amounts of inbound traffic, and we have multiple levels where we try to discard or queue bad traffic. Customers can also tap WAF [web application firewall] functionalities for higher level application protection.
And so, there are two levels of protection – the first is fending off volumetric attacks, where the sheer amount of traffic is the problem, and the second is defending against, say, a denial-of-inventory attack, which involves far less traffic, so it never trips the lower level. We have an edge rate limiter that customers can use to protect against those attacks. We also have managed security services with a security operations centre that identifies new threats and tracks what's going on to protect our customers.
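An edge rate limiter of the kind Bergman mentions – catching low-volume abuse such as denial-of-inventory attacks that volumetric defences miss – can be sketched as a per-client sliding window. This is a generic illustration, not Fastly's edge rate limiter:

```python
import collections
import time

class EdgeRateLimiter:
    """Toy sliding-window rate limiter: allow at most `limit` requests
    per client within any `window`-second span. Low-volume attacks are
    caught per client rather than by total traffic volume."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        # client_id -> deque of recent request timestamps
        self.hits = collections.defaultdict(collections.deque)

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[client_id]
        while q and now - q[0] >= self.window:
            q.popleft()              # drop hits that aged out of the window
        if len(q) >= self.limit:
            return False             # over the limit: reject this request
        q.append(now)
        return True
```

A bot hammering an add-to-cart endpoint is rejected once it exceeds its per-client budget, while other clients are unaffected.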
How is Fastly building up its infrastructure footprint in terms of its POPs across the world? Could you talk us through your strategy in that regard?
Bergman: We have a different POP strategy. Having more POPs is not always better, because spreading traffic across more locations lowers the cache hit ratio when you are serving massive amounts of data. We want our POPs to scale horizontally like everyone else's, but we also want our machines to scale vertically.
Today, we have 99 POPs across 35 countries and our strategy has been to combine where we think our customers want us to be and where we should be because we have a lot of traffic in a certain location. We’ve had a POP in Singapore for a long time, but it primarily serves customers outside Singapore. We also built POPs in Tokyo and Hong Kong pretty early, as well as Australia and New Zealand.
When we enter a market, we decide – depending on the opportunity and the traffic – how many POPs we should have and where to place them. We’ve tried to choose the most connected buildings and locations. The POPs themselves are designed differently – we have our own SDN [software defined networking] controller. We use Arista switches that run our own software and it’s a very flat architecture.
And then we have really powerful servers. The current generation servers each have 400Gbps NICs [network interface cards], a terabyte of RAM, 50TB or so of SSDs [solid state drives] and 64 cores. Our machines can push a lot of traffic so it’s really useful in cases of traffic spikes or DDoS attacks. Each individual machine has all the capacity, but we also have a lot of storage – some of our biggest POP machines might cache 300 million objects.
Talk to us about your business in the Asia-Pacific region and where you see opportunities.
Bergman: We’ve had POPs in the region but until recently, we didn’t have a team here. We were planning to start a team and invest in this region at the end of 2019, then Covid-19 hit, so it was impossible to expand to new markets. The team here was started less than a year ago and we now have seven people in various functions, including technical operations, customer support and engineering. We’ve also added POPs in nine new locations in APAC in the past year and a half, with significant infrastructure upgrades in Australia and Japan.
From my conversations with potential customers, APAC is a highly connected market and as customers try to be more global, it’s important that their applications can scale. We also see a lot of opportunities in countries such as Indonesia or Malaysia.
You mentioned some of the security capabilities that Fastly had acquired. What is the thinking around evolving the platform? What sorts of new capabilities are you looking at?
Bergman: We want to be an amazing platform for people to build applications and innovate on the edge. A lot of investments are now going into the compute platform and supporting more languages and databases. We have our global key value store, and we want to integrate with more data stores.
We also launched our observability capabilities at the end of last year. We have a privileged view of the internet, and we see so much of the traffic. In the past, customers used Fastly to see what was happening and how they were doing, but they had to do all the work themselves, such as pulling together log files and building their own dashboards. With observability, we are extending that by collecting more data and giving our users visibility over what’s happening on their network.
With regard to security, we believe our products have to enable developers and product engineering teams to deliver features to end-users, and modern security organisations see themselves as enablers that allow developers to move faster. And so, we see the security components of our platform as something that enables companies to get stuff done.
In November 2022, we also announced a project that lets you spin up Fastly on a cloud provider or in your on-premise datacentre, giving you access to our security, compute, observability and caching features wherever you need them. There’s been a lot of interest from customers, and that’s coming next year.