Serving dynamic social preview images with rust and serverless functions
Hi. I'm Mark. I'm a software engineer and among other things I like to make little puzzle games. One of the features I've made for several of my games is that when a user shares a link on social media they get a preview image of the level that they were playing. Making this happen is non-trivial and in this post I'm going to explain how I do it in a generic enough way that you should be able to do something similar for your own websites.
This post has four sections
- An explanation of Open Graph
- How to set up serverless functions in rust
- A method of generating images in rust
- How to write an edge function to point web scrapers at your images
Most of the code in this article is written in rust. But even if you are writing in a different language, a lot of it may still be useful to you.
I use netlify to host my websites. Most of what's in this article is applicable to other hosting providers, especially the serverless functions which are really just AWS lambda serverless functions.
Myriad
I'm going to use my open source game Myriad as an example throughout, so I should probably briefly explain it. It's a maths puzzle game where you try to find every number from 1 to 100 in a 3x3 grid of numbers and math operators. If you're British, it's like Countdown meets Boggle.
When you are playing it looks like this
When you share the link to the main website (https://myriad-game.com) on WhatsApp it looks like this:
But when you share the link to a particular level (https://myriad-game.com/game/+1893-216) it looks like this:
You get something similar on most other platforms, though some display the landscape image instead.
Open Graph
Open Graph is a protocol that lets developers control how their websites appear when shared on social media. It was developed by Facebook but most social media platforms support it.
To use Open Graph you add <meta> tags to the <head> section of your website.
The relevant section for Myriad looks like this
<meta property="og:site_name" content="Myriad" />
<meta property="og:type" content="website" />
<meta property="og:title" content="Myriad" />
<meta property="og:url" content="https://myriad-game.com" />
<meta
property="og:description"
content="Find every number from one to one hundred."
/>
<meta
property="og:image"
content="https://myriad-game.com/icon/og_image_landscape.png"
/>
<meta property="og:image:width" content="630" />
<meta property="og:image:height" content="1200" />
<meta
property="og:image:alt"
content="Myriad: find every number from one to one hundred"
/>
<meta
property="og:image"
content="https://myriad-game.com/icon/og_image_square.png"
/>
<meta property="og:image:width" content="1080" />
<meta property="og:image:height" content="1080" />
<meta property="og:image:alt" content="The Myriad Logo" />
If you look at the WhatsApp preview image again, you can see where the title, url, and description are displayed, and that it is using the square image.
If you were to share it on Facebook you would get something similar but with the landscape image instead.
Getting different images to appear on different platforms is slightly fiddly. As I've done above, you have to list multiple og:image tags and specify their sizes. WhatsApp seems to always use the last one on the list, whereas Facebook looks for one with the best aspect ratio (1.91:1). This behaviour might change in the future of course.
It can be helpful to test how a website will look on Facebook using their handy debug tool.
If you just want the social preview images and don't need them to be dynamic, something similar to the above is all you have to do.
This article is about how to get dynamic images to work. At first it might seem obvious: have some javascript (or web assembly compiled from rust) on the page to look at the url, extract the relevant information, generate a corresponding image, and replace the content of the og:image meta tag with that image as a data uri.
Unfortunately there are two issues with that solution
- The Open Graph protocol doesn't support data uris - you have to provide an actual link to where your image can be downloaded
- When social media platforms scrape your website for the meta tags, they just look at the raw html and don't run any scripts
The solution to the first problem is to link to a serverless function that generates the images. The solution to the second is to use an edge function to transform the html before the scraper sees it.
Serverless functions
We're going to use a serverless function to dynamically generate the images. Serverless functions are great, because they are cheap, they scale automatically, and you can write them in rust.
You don't have to do much special ceremony to create a serverless function in rust. Just do cargo new and then add the following dependencies to Cargo.toml
[dependencies]
tokio = { version = "1.28", default-features = false }
lambda_runtime = { version = "0.8", default-features = false }
aws_lambda_events = { version = "0.9", default-features = false, features = ["apigw"] }
- tokio lets you have an async main method which you can run async functions inside
- lambda_runtime is the runtime for running your functions as a service
- aws_lambda_events gives you the ApiGatewayProxyRequest and ApiGatewayProxyResponse objects you need. They are behind the apigw feature.
With these three you can write a working serverless function with idiomatic rust code.
use aws_lambda_events::encodings::Body;
use aws_lambda_events::event::apigw::{ApiGatewayProxyRequest, ApiGatewayProxyResponse};
use aws_lambda_events::http::{HeaderMap, HeaderValue};
use lambda_runtime::{service_fn, Error, LambdaEvent};

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Register the handler and hand control over to the lambda runtime.
    let f = service_fn(image_request_handler);
    lambda_runtime::run(f).await?;
    Ok(())
}

async fn image_request_handler(
    lambda_event: LambdaEvent<ApiGatewayProxyRequest>,
) -> Result<ApiGatewayProxyResponse, Error> {
    // For now, ignore the request and just return some plain text.
    let resp = ApiGatewayProxyResponse {
        status_code: 200,
        headers: HeaderMap::new(),
        multi_value_headers: HeaderMap::new(),
        body: Some(Body::Text("Hello World".to_string())),
        is_base64_encoded: false,
    };
    Ok(resp)
}
There is a cargo lambda subcommand which can do all sorts of useful things - definitely check it out if you want to do anything much more complicated than this.
Deploying this to netlify is a breeze. You just put it in the right folder netlify/functions/image and add a small section in your netlify.toml that tells it that you're using rust
[context.production]
environment = { NETLIFY_EXPERIMENTAL_BUILD_RUST_SOURCE = "true" }
Once you've done that you can deploy to netlify and see your wonderful creation with the relative url /.netlify/functions/image. This ugly url is configurable but for simplicity's sake I don't bother as it's only seen by web scrapers.
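If you did want a friendlier path, a rewrite rule in netlify.toml along these lines should work. This is a sketch I haven't deployed: the path is made up, and it's worth checking the netlify redirect docs for how query strings are forwarded.

# Hypothetical rewrite: serve the image function from a nicer-looking path.
[[redirects]]
  from = "/social-image"
  to = "/.netlify/functions/image"
  status = 200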
For reference, the original example that this is based on is here but there have been relevant breaking changes to the aws_lambda_events crate since that was created.
Generating Dynamic Images
Now that we've got a basic serverless function we want to actually use it to generate images.
The first thing to do is to extract the relevant parameters from the url.
Extracting Parameters
If the url is https://myriad-game.com/.netlify/functions/image?level=-+23+9751&width=1200&height=630, we first want to extract the "level", the "width", and the "height".
I've written a helper function to extract the parameters
fn get_parameter<'a>(
    lambda_event: &'a LambdaEvent<ApiGatewayProxyRequest>,
    name: &'static str,
) -> Option<&'a str> {
    lambda_event
        .payload
        .query_string_parameters
        .iter()
        .filter(|x| x.0.eq_ignore_ascii_case(name))
        .map(|x| x.1)
        .next()
}
and I call it like this
let level = get_parameter(&lambda_event, "level").unwrap_or("____?____");

let width = get_parameter(&lambda_event, "width")
    .and_then(|x| x.parse().ok())
    .unwrap_or(1080);

let height = get_parameter(&lambda_event, "height")
    .and_then(|x| x.parse().ok())
    .unwrap_or(1080);
Error Handling
This is the appropriate time for a quick note about error handling.
My philosophy is that if someone sends a request to this endpoint, even if it has a missing or wrong parameter, they should get a valid image back. The only exception is if I, the programmer, have made a mistake. In that case I would rather they see an error message than an incorrect image.
This program should immediately panic if something unexpected goes wrong (e.g. my svg template is invalid). That way I will see an error on the netlify console and the user will just see no image on their share. If, on the other hand, the user decides to try and share an invalid level, I generally just show an image of a big question mark to indicate in a friendly way that they've made a mistake.
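To make that concrete, here is a minimal sketch of the user-mistake half of that policy. The function name and the fallback grid are illustrative rather than my real code, and it assumes a level string is just the nine grid characters:

fn chars_for_level(level: &str) -> [char; 9] {
    let chars: Vec<char> = level.chars().collect();
    // Anything that isn't exactly nine characters is the *user's* mistake,
    // so fall back to a friendly "question mark" grid instead of erroring.
    <[char; 9]>::try_from(chars).unwrap_or(['_', '_', '_', '_', '?', '_', '_', '_', '_'])
}

Programmer mistakes, by contrast, are handled with expect and panic! throughout the rendering code below.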
Image Templates
For Myriad, the way I dynamically generate the images is that I have a template file in svg format and I change the numbers according to the "level" parameter in the request.
This way, I could, in principle, have an artist work on the template file and make it prettier without having to touch the code.
The way that my program knows which numbers to change is that they are all in text elements with id attributes.
<text id="text0" x="48" y="64" text-anchor="middle"
style="font-family:Inconsolata;font-size:50px;line-height:1;stroke:#1f1b20;stroke-width:0.3"
>0</text>
I actually have two template files, one for square images and one for landscape but I've decided to elide that here for simplicity.
Building the Tree
To do the actual svg manipulation and convert it to png, I use resvg
resvg = { version = "0.34", default-features = false, features=["text"] }
I then embed the template in the binary (just put it in the src folder and use include_bytes!) and use it to make my svg tree.
fn draw_image(chars: [char; 9], width: u32, height: u32) -> Vec<u8> {
    let opt: resvg::usvg::Options = Default::default();
    let bytes: &'static [u8] = include_bytes!("template_square.svg");
    let mut tree = Tree::from_data(bytes, &opt).expect("Could not parse template");

    todo!()
}
After that, I go through and replace all the text nodes (which have ids "text0", "text1", etc.) with the correct characters (not shown is my incredibly ugly function mapping the value of the "level" parameter to an array of nine characters).
for (index, character) in chars.into_iter().enumerate() {
    let id = format!("text{}", index);
    let node = tree
        .node_by_id(id.as_str())
        .expect("Could not find node by id");

    if let NodeKind::Text(ref mut text) = *node.borrow_mut() {
        text.chunks[0].text = character.to_string();
    } else {
        panic!("Node was not a text node")
    };
}
Text Handling
I'm now almost ready to render the svg tree and return it, but because of the way resvg works, I first need to convert all the text nodes to paths.
You may have noticed the font-family:Inconsolata; style in the svg. That's the font I want. It was designed by Ralph Levien to demonstrate that "monospaced fonts do not have to suck". Luckily, it's available on google fonts and, like the image templates, I can embed it in the binary. From there, applying it to the svg file is straightforward (it does require the text feature of resvg though).
let font_data: Vec<u8> = include_bytes!("Inconsolata-Regular.ttf").to_vec();
let mut font_database: fontdb::Database = fontdb::Database::new();
font_database.load_font_data(font_data);
tree.convert_text(&font_database);
The Transform
The final wrinkle before rendering is to think about dimensions. Because the template might not be the same size as specified in the request, I need to scale it. Ideally it will have the same aspect ratio, but just in case it doesn't I apply a translation to center it.
let x_scale = width as f32 / tree.size.width();
let y_scale = height as f32 / tree.size.height();
let scale = x_scale.min(y_scale);
let tx = (x_scale - scale) * 0.5 * tree.size.width();
let ty = (y_scale - scale) * 0.5 * tree.size.height();
let transform = resvg::tiny_skia::Transform::from_scale(scale, scale).post_translate(tx, ty);
Rendering and Returning
Now at last I can render the image as a png
let mut pixmap = resvg::tiny_skia::Pixmap::new(width, height).expect("Could not create Pixmap");

resvg::Tree::render(
    &resvg::Tree::from_usvg(&tree),
    transform,
    &mut pixmap.as_mut(),
);

return pixmap.encode_png().unwrap();
png is generally a better format than jpeg for procedurally generated images like these.
The final thing to do is to add the image data to the response
let data = draw_image(chars, width, height);

// Tell the scraper that the body is a png image.
let mut headers = HeaderMap::new();
headers.insert("Content-Type", HeaderValue::from_static("image/png"));

let resp = ApiGatewayProxyResponse {
    status_code: 200,
    headers,
    multi_value_headers: HeaderMap::new(),
    body: Some(Body::Binary(data)),
    is_base64_encoded: true,
};
Testing
Obviously, I'm not done. I haven't written any tests! I especially want tests that fail if the template changes or something else causes the images to look different. For this I use snapshot testing, for which rust has a library and cargo subcommand called insta. Snapshot testing doesn't actually have anything to do with images; it lets you assert that a reference value produced by your test hasn't changed since the last time the test ran successfully. If it does change, you can check that the result is still correct and manually accept the change.
For example, I could change the background color in my template svg. If I then ran the tests they would fail because the reference values are different. I could then manually check the images and if they still look good I can run cargo insta accept to update the reference values. After that the tests will pass again.
In this instance, the reference values I want to use are the hashes of the images. I could use the whole image but the .snap files that are generated would be a bit too large for me to want to put in source control. I do write the images to my local file system so I can visually validate them, but I have told git to ignore those. If you do something like this yourself, make sure to write the images before you validate the hashes; otherwise a failing hash assertion will stop the new images being written, the old image files will remain, and you will end up visually validating the wrong thing. Yes. I did this.
Below is my test code. I am using ntest for the test_case attribute which makes life a lot easier.
#[cfg(test)]
mod tests {
    use crate::*;
    use ntest::test_case;
    use std::hash::{Hash, Hasher};

    #[test_case("+1-5-2495", 1200, 1080)]
    #[test_case("+1-5-2495", 1080, 1200)]
    #[test_case("+1-5-2495", 1080, 1080)]
    #[test_case("+1-5-2495", 1200, 630)]
    #[test_case("invalid", 1080, 1080)]
    #[test_case("invalid", 1200, 630)]
    fn test_image(level: &str, width: u32, height: u32) {
        let chars = map_chars(level);
        let data = draw_image(chars, width, height);
        let len = data.len();

        // Write the image out first so it can be checked visually even if
        // the hash assertion below fails.
        let path = format!("image_{level}_{width}x{height}.png");
        std::fs::write(path, data.as_slice()).unwrap();

        let hash = calculate_hash(&data);
        insta::assert_debug_snapshot!(hash);
        assert!(len < 300000, "Image is too big - {len} bytes");
    }

    fn calculate_hash<T: Hash>(t: &T) -> u64 {
        let mut s = std::collections::hash_map::DefaultHasher::new();
        t.hash(&mut s);
        s.finish()
    }
}
I also test that the file size of the images is not too large. Mine come out at about 200kb and I believe 300kb is the maximum size that WhatsApp supports.
That's all on generating the images. The final piece of the puzzle is to use edge functions to direct the social media scrapers at the correct image urls.
Edge Functions
Edge functions are serverless functions that run on the server closest to the user. Whenever someone loads your website you can run a function that changes the html they see, for example based on their location. We're not interested in their location here but we are interested in the url path.
Basically we want to look at the url, extract the level information and use that to change the og:image tag before the WhatsApp scraper gets to it.
Unfortunately support for rust edge functions isn't brilliant just yet, at least on netlify. You can actually write them in rust, but then you have to compile your code to wasm and inline the compiled bytes as a javascript array. I flat out refuse to do this.
So javascript it is. I've written my edge function to be as simple as possible - all it does is a few string replacements on the page html.
export default async (request, context) => {
  const url = new URL(request.url);
  const response = await context.next();
  let page = await response.text();

  try {
    const game = url.pathname.substring(6); //remove the '/game/' from the pathname

    page = page.replace(
      `https://myriad-game.com/icon/og_image_square.png`,
      `https://myriad-game.com/.netlify/functions/image?level=${game}&width=1080&height=1080`
    );

    page = page.replace(
      `<meta property="og:url" content="https://myriad-game.com"`,
      `<meta property="og:url" content="https://myriad-game.com/game/${game}"`
    );

    return new Response(page, response);
  } catch {
    return response;
  }
};
In addition to updating the image link I am updating the og:url tag. If I didn't do that, none of this would work.
The og:url is used as an id for the page. That means if two pages have the same id, they will have the same image. That's great for ignoring things like ref parameters that track how you got to the page but it's a problem if https://myriad-game.com/game/+1893-216 has the same id as https://myriad-game.com as they would then have the same image.
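For example, after the edge function has run for the level from earlier, the two rewritten tags should come out roughly like this:

<meta property="og:url" content="https://myriad-game.com/game/+1893-216" />
<meta
  property="og:image"
  content="https://myriad-game.com/.netlify/functions/image?level=+1893-216&width=1080&height=1080"
/>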
I can't speak for other platforms, but deploying an edge function on netlify is a breeze. The code above is in netlify/edge-functions/og-param-proxy.js and in my netlify.toml I have the following. That's all I had to do.
[[edge_functions]]
function = "og-param-proxy"
path = "/game/*"
There's also a netlify cli tool that lets you test locally. Once installed you can do netlify dev and it will run your site; you can then use your browser tools to look at the html and check it's correct. I do recommend doing this as those string replacements in javascript can be fiddly. Trunk, the web application bundler for rust, will change your html slightly, so make sure to copy the pattern strings out of the generated index.html rather than the one you actually write.
Anyway, I hope you found this guide helpful. Please let me know if I've made any mistakes or if you managed to set this up yourself.