Azure Functions: Serverless Without the Hype
Real-world Azure Functions patterns used in production, with practical examples and lessons learned the hard way.
Jean-Pierre Broeders
Freelance DevOps Engineer
Azure Functions has been running in production environments for years now. Not toy projects - actual workloads where downtime costs money. Here's what you learn, usually after something breaks at 3 AM.
What Azure Functions actually is
It's Microsoft's serverless compute platform. Write code, push it to Azure, they run it for you. Pay only for what you use. Sounds amazing, right?
In practice, it means not managing VMs. No OS patches, no load balancer configs, no sleepless nights because a Kubernetes cluster decided to implode. For many use cases, that's a godsend.
Real example: Image resizing
A typical scenario: an e-commerce platform where product photos get uploaded, often 5-10MB each. Those need to be resized into thumbnails, medium, and large versions.
First version: monolithic API that handled uploads and resized images synchronously. Worked fine until the platform went viral on Instagram. The API collapsed under load.
Second version with Azure Functions:
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Formats.Jpeg;
using SixLabors.ImageSharp.Processing;

[FunctionName("ResizeImage")]
public static async Task Run(
    [BlobTrigger("uploads/{name}", Connection = "StorageConnection")] Stream imageStream,
    string name,
    [Blob("thumbnails/{name}", FileAccess.Write, Connection = "StorageConnection")] Stream thumbnailStream,
    [Blob("medium/{name}", FileAccess.Write, Connection = "StorageConnection")] Stream mediumStream,
    ILogger log)
{
    log.LogInformation("Processing image: {Name}", name);

    using var image = await Image.LoadAsync(imageStream);

    // Thumbnail: 150x150, cropped to square
    using (var thumbnail = image.Clone(x => x.Resize(new ResizeOptions
    {
        Size = new Size(150, 150),
        Mode = ResizeMode.Crop
    })))
    {
        await thumbnail.SaveAsync(thumbnailStream, new JpegEncoder { Quality = 85 });
    }

    // Medium: scaled to fit within 800x800, aspect ratio preserved
    using (var medium = image.Clone(x => x.Resize(new ResizeOptions
    {
        Size = new Size(800, 800),
        Mode = ResizeMode.Max
    })))
    {
        await medium.SaveAsync(mediumStream, new JpegEncoder { Quality = 90 });
    }
}
Blob trigger means: when a file lands in the uploads container, this function starts automatically. Output gets written to thumbnails and medium containers.
Scaling? Automatic. If 1000 uploads hit at once, Azure just spawns more instances. Cost? Went from €200/month for a dedicated VM to €15/month average.
Things nobody tells you
Cold starts are real. If a function hasn't run in a while, the first request takes 2-5 seconds. For an API endpoint, that's unacceptable. Solutions:
- Premium plan (always warm, but expensive)
- Ping function every 5 minutes to keep it warm
- Accept that for async workloads it doesn't matter
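The "ping every 5 minutes" option can itself be a function. A minimal sketch using a timer trigger - the function name and the health endpoint URL are placeholders, not anything from the project above:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class WarmUp
{
    private static readonly HttpClient Http = new HttpClient();

    // NCRONTAB expression: at second 0, every 5 minutes
    [FunctionName("WarmUp")]
    public static async Task Run(
        [TimerTrigger("0 */5 * * * *")] TimerInfo timer,
        ILogger log)
    {
        // Hitting any HTTP endpoint in the same app keeps the instance loaded
        var response = await Http.GetAsync("https://my-function-app.azurewebsites.net/api/health");
        log.LogInformation("Warm-up ping returned {StatusCode}", response.StatusCode);
    }
}
```

Crude, but it works. Just remember it only keeps one instance warm - a burst of traffic still pays the cold-start tax on the extra instances Azure spins up.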
Dependencies are tricky. Native libraries can't just be included. ImageSharp works because it's pure .NET. But try something with FFmpeg - that gets complicated fast.
Timeouts. Consumption plan: 5 minutes by default, 10 minutes max (set in host.json). After that, the function gets killed. Long-running jobs need to be planned differently, or use Durable Functions.
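Durable Functions sidestep the timeout by splitting work into an orchestrator and short-lived activities: the orchestrator's progress is checkpointed to storage, so only each individual activity has to fit inside the time limit. A sketch using the Durable Functions 2.x API - the function names and work items are illustrative:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class LongRunningJob
{
    [FunctionName("ProcessBatch")]
    public static async Task Orchestrate(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        var items = await context.CallActivityAsync<List<string>>("GetWorkItems", null);

        // Each activity call is a separate invocation with its own timeout;
        // between awaits, the orchestrator replays from history instead of
        // staying alive, so the total job can run far longer than 10 minutes.
        foreach (var item in items)
        {
            await context.CallActivityAsync("ProcessItem", item);
        }
    }

    [FunctionName("ProcessItem")]
    public static Task ProcessItem([ActivityTrigger] string item)
    {
        // Do one bounded chunk of work here
        return Task.CompletedTask;
    }
}
```

The catch: orchestrator code must be deterministic (no direct I/O, clocks, or random values), because it gets replayed.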
Local development setup
The Azure Functions Core Tools are... okay. Not great, but workable:
func init MyFunctionApp --dotnet
cd MyFunctionApp
func new --name ProcessOrder --template "HTTP trigger"
func start
Using a local storage emulator? Azurite is the standard now (the old Azure Storage Emulator is deprecated). Point local.settings.json at it:
{
"IsEncrypted": false,
"Values": {
"AzureWebJobsStorage": "UseDevelopmentStorage=true",
"FUNCTIONS_WORKER_RUNTIME": "dotnet"
}
}
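Azurite itself installs via npm, assuming Node.js is available:

```shell
# Install Azurite globally and run it against a scratch directory
npm install -g azurite
azurite --silent --location ./azurite-data
```

With that running, `UseDevelopmentStorage=true` resolves to the local emulator and blob triggers fire against it.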
For complex scenarios though, deploying to a dev environment in Azure can be more practical. Turnaround time is acceptable and it tests how things actually work in production.
Monitoring and debugging
Application Insights is not optional. Seriously. Without proper logging, you're flying blind.
log.LogMetric("ImageProcessingTime", stopwatch.ElapsedMilliseconds);
log.LogInformation("Processed {fileName} in {duration}ms", name, stopwatch.ElapsedMilliseconds);
Custom metrics cost nothing extra and give insight into performance patterns. Log processing times, file sizes, error rates - when things break, you need data to work with.
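Wiring that up is a few lines per function. A sketch of the pattern - the helper and metric names are arbitrary, and LogMetric comes from the Functions Application Insights package, not base ILogger:

```csharp
using System;
using System.Diagnostics;
using Microsoft.Extensions.Logging;

public static class Instrumented
{
    public static void Process(string name, long fileSizeBytes, ILogger log, Action work)
    {
        var stopwatch = Stopwatch.StartNew();
        try
        {
            work();
            log.LogMetric("ImageProcessingTime", stopwatch.ElapsedMilliseconds);
            log.LogMetric("ImageSizeBytes", fileSizeBytes);
        }
        catch (Exception ex)
        {
            // Failures become both an exception trace and a countable metric,
            // so error rate can be charted next to duration
            log.LogMetric("ImageProcessingFailures", 1);
            log.LogError(ex, "Failed to process {FileName}", name);
            throw;
        }
    }
}
```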
Deployment pipeline
GitHub Actions for CI/CD. Simple and reliable:
- name: Deploy to Azure Functions
  uses: Azure/functions-action@v1
  with:
    app-name: my-function-app
    package: ./output
    publish-profile: ${{ secrets.AZURE_FUNCTIONAPP_PUBLISH_PROFILE }}
Slot swaps for zero-downtime deploys. Deploy to staging slot, test, swap to production. If it breaks, swap back in 10 seconds.
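The swap itself is a single Azure CLI call - resource group and app names here are placeholders:

```shell
# Swap the verified staging slot into production (and production into staging)
az functionapp deployment slot swap \
  --resource-group my-rg \
  --name my-function-app \
  --slot staging \
  --target-slot production
```

Running the same command again is the 10-second rollback, since the old production build is now sitting in the staging slot.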
When NOT to use Azure Functions
It doesn't work for everything. Stateful applications? No. Long batch jobs? No. Real-time websockets? Difficult.
It's perfect for:
- Event-driven processing (queue messages, blob uploads, database changes)
- API endpoints with low to medium traffic
- Scheduled jobs (daily reports, cleanup tasks)
- Webhook handlers
Cost perspective
For that image resizing project:
- 50,000 executions/day
- Average 200ms runtime
- 512MB memory
- ~€12/month
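Consumption-plan compute is billed per execution plus per GB-second (memory × duration), with a monthly free grant; storage transactions and bandwidth are billed on top. The numbers above translate like this - treat it as the shape of the calculation, not current pricing:

```csharp
using System;

class ConsumptionEstimate
{
    static void Main()
    {
        const int executionsPerDay = 50_000;
        const double avgSeconds = 0.2;   // 200 ms average runtime
        const double memoryGb = 0.5;     // 512 MB

        double executionsPerMonth = executionsPerDay * 30.0;            // 1,500,000
        double gbSeconds = executionsPerMonth * avgSeconds * memoryGb;  // 150,000

        // These two figures, minus the free grant, drive the compute bill;
        // the rest of the monthly cost is storage and bandwidth.
        Console.WriteLine($"{executionsPerMonth} executions, {gbSeconds} GB-seconds per month");
    }
}
```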
A comparable VM would cost €50-100/month, plus time to manage it.
Serverless isn't always cheaper, but it scales better and requires less operational overhead. For a freelancer, that's time that can be spent on client work instead of infrastructure babysitting.
Bottom line
Azure Functions works well when the boundaries are known. It's not a silver bullet, but for the right use cases it saves time and money.
Start small. A simple HTTP trigger or queue processor. Learn how it scales, how monitoring works, how to set up deployment pipelines. Then you can judge whether it fits the project.
And if someone tells you serverless "auto-magically" solves everything - run. There's no magic, only trade-offs that need to be made consciously.
