AKA “How to Varnish like a Boss”
Angry Creative provides its own hosting solution to make sure we can provide the most value and minimise issues in our projects and ongoing work with our clients. We call this product “Synotio”, and it is its own legal entity. In this article, our infrastructure expert Toni Cherfan shares how we configure Varnish caching to work with WordPress and WooCommerce.
Why?
Compute capacity is and will always be a limited resource. Back in the 1960s during the NASA Apollo program that brought the US to the moon, the Apollo Guidance Computer (AGC) famously experienced two “program alarms”, namely the 1201 (Executive Overflow – No core sets) and the 1202 (Executive Overflow – No VAC areas). They were caused by the astronauts leaving the radar in SLEW mode, which flooded the AGC with interrupt signals that prevented it from performing all the tasks it needed to do: it simply could not process the data it was receiving at a high enough speed to keep up. You can read more about it here.
The consequence of limited compute capacity can be thought of as having a pool of resources. It is illustrated as a pie chart below.
⚠️ These pie charts are entirely arbitrary and do not represent any real data!
As a developer, it is your responsibility to make efficient use of these resources. Varnish is here to help you with this task.
How?
On an uncached site, the pie chart above is highly influenced by user traffic patterns, but also by design choices. For example, if you decide to load half of the page using AJAX, the resource pool now looks like this:
The purpose of an HTTP frontend cache is to offload the backend by serving the result of one request to multiple users.
- On the first pageview, the cache is essentially invisible. It looks up the request in the cache data store and finds nothing. The request is therefore passed to the backend web server but on its way back to the user, it’s stored in the cache data store as well.
- On subsequent requests for the same page the cache looks up the request the exact same way as last time, but this time it finds an entry for the requested page. The response is therefore served to the next user requesting the page and so on.
This represents a significant saving in resources because instead of having to parse server side scripts, make database requests and compile the page that’s been requested, the cache simply re-serves the precompiled page. It’s the difference between having to run a program and just serving a static file; a huge reduction in computing resources needed.
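The hit/miss flow can be sketched in a few lines of Python (a toy illustration, not how Varnish is implemented; `render_page` is a hypothetical stand-in for the expensive backend work):

```python
# Minimal sketch of an HTTP frontend cache: a key-value store keyed on the URL.
cache = {}

def render_page(url):
    # Stand-in for the expensive part: parsing server-side scripts,
    # making database requests and compiling the page.
    return f"<html>content of {url}</html>"

def handle_request(url):
    if url in cache:              # subsequent requests: serve from the cache
        return cache[url], "HIT"
    body = render_page(url)       # first request: pass to the backend...
    cache[url] = body             # ...and store the response on the way back
    return body, "MISS"
```

The first call for a URL is a MISS that populates the store; every later call for the same URL is a HIT that never touches the backend.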
But of course, there are lots of situations where serving the exact same page to several users could be bad. Let’s look at that.
Dealing with cookies
The purpose of cookies is to influence the server’s behavior when creating responses. In the case of user sessions, the cookie contains a unique identifier for the logged-in user. By definition, this means that a response that was requested with a cookie cannot be served to multiple users, since it was generated for a single user. A request with a cookie can therefore seem uncacheable.
In the real world, all kinds of things set cookies. One such example is Google Analytics and other tracking software. It is, therefore, improper to assume that just because a request has cookies, it is also uncacheable. A workaround is to force the request to be cacheable by removing the user-specific parts of the request, or in clear text: unset all cookies. In Varnish Configuration Language (VCL) code this is done using something like:
sub vcl_recv {
    unset req.http.Cookie;
}

sub vcl_backend_response {
    unset beresp.http.Set-Cookie;
}
So what’s the problem with applying the above? The site would be unable to generate user-specific content! This is almost fine on a CMS site, except that you would not be able to use the admin, since the admin requires sessions. On an e-commerce site the problem gets a lot worse, since the cart and checkout are by definition user-specific: they would not work without cookies. We therefore need a mechanism for making exceptions to this. This is discussed in the VCL section below.
🚨 Some REALLY bad cache implementations ignore cookies present in the request and just cache the response anyway. The result is a disaster where the admin, checkout and cart might be cached with data from other users. 🚨
⚠️ There are actually lots of different ways to handle cookies. For example, the Varnish default implementation is to simply skip caching of requests with cookies on them, and you could in theory handle such requests dynamically by receiving a header from the backend that signifies whether or not the page is uncacheable, restarting the request pipeline and marking the currently processed URL as a hit-for-pass object. There’s no specific benefit to the way we do things, it’s just a design decision.
What about speed?
A side effect of caching HTTP responses is that it’s a lot faster to deliver a rendered response from memory than rendering that response on the backend. Computers are very good at copying data, and in the case of Varnish we just need to copy a memory location to a network socket so that the browser can receive it. This is orders of magnitude faster than starting up PHP, loading WP and chugging through the site code. However, this is not our main goal!
Our goal is to free up backend resources so they can be put to better use. Take our resource pool again:
We’ve already established in the cookie section above that the admin and the checkout are uncacheable. That leaves the pages as cacheable. Assuming a 90% hitrate on the pages, this would result in a new resource pool like this:
This is what we want to accomplish!
In this example, we can now more than double the visitor capacity and still have room to spare for other things. Varnish has enabled us to do more with less.
⚠️ This guide is mostly directed at developers, not marketing per se! In the marketing world, it’s much easier to sell Varnish as something that speeds up your site rather than something that gives you better resource utilization. This is however only a side effect albeit a valuable one!
A word about hashing
Think of the cache store as a table of key-value pairs. The request is the key and the value is the content of the response. The request usually involves things like:
- HTTP scheme (is the request `http` or `https`?)
- HTTP method (a `GET` request is different from a `HEAD` request, which in turn is different from a `POST`)
- URL
- Site domain name

ℹ The query string is the part of the URL that looks like `?search=socks` – as part of a whole URL, it often looks something like this: `www.yoursite.com/?search=socks`
However, storing all this information in the key would result in lots of wasted space, since a URL alone can be up to 2048 bytes long! Therefore, the data is first passed through a hashing algorithm that is outside the scope of this page (you can read about it here) and is compressed into far less space. This takes place during `vcl_hash` (see below), and the `hash_data()` function allows us to add more data to the hash key.
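The idea can be illustrated in Python (SHA-256 here is just for illustration; Varnish has its own hashing internals):

```python
import hashlib

def make_cache_key(scheme, method, url, host):
    # Concatenate the request properties that identify a cacheable response...
    raw = f"{scheme}|{method}|{url}|{host}".encode()
    # ...and compress them into a small fixed-size key, like vcl_hash does.
    return hashlib.sha256(raw).hexdigest()
```

Identical requests always produce the same key, so they map to the same cached object, while any change to the URL, host, scheme or method yields a different key.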
Varnish logic (VCL)
Varnish Configuration Language (VCL) is a programming language in itself. The syntax and functions are beyond the scope of this document (you can read more about them here), but the concepts below should be illustrative enough to be readable by most programmers.
To understand how Varnish does things and how the Synotio default VCL works, we need to look at the Varnish request pipeline:
This looks complicated, but there are lots of steps we don’t need to care about that just happen automatically. Let’s start from the top:
vcl_recv
This step is invoked once the request has been received by Varnish. It is at this stage we decide whether or not the request should be cacheable.
The standard Synotio `vcl_recv` looks as follows:
sub vcl_recv {
    ## Handle PURGE requests differently depending on whether we're purging an exact URL or a regex, set by the varnish-http-purge plugin
    ## See the purging section below for more information on this
    if (req.method == "PURGE") {
        if (req.http.X-Purge-Method ~ "(?i)regex") {
            call purge_regex;
        } elsif (req.http.X-Purge-Method ~ "(?i)exact") {
            call purge_exact;
        } else {
            call purge_exact;
        }
        return (purge); ## Terminate the request. There is no point in sending it to the backend since it's meant for Varnish.
    }

    set req.backend_hint = cms_lb.backend(); ## Set the backend that will receive the request
    set req.http.X-Forwarded-Proto = "https"; ## Force the backend to believe the request was served using HTTPS

    if (req.url ~ "(wp-login|wp-admin|wp-json|preview=true)" || ## Uncacheable WordPress URLs
        req.url ~ "(cart|my-account/*|checkout|wc-api/*|addons|logout|lost-password)" || ## Uncacheable WooCommerce URLs
        req.url ~ "(remove_item|removed_item)" || ## Uncacheable WooCommerce URLs
        req.url ~ "\?add-to-cart=" || ## Uncacheable WooCommerce URLs
        req.url ~ "\?wc-(api|ajax)=" || ## Uncacheable WooCommerce URLs
        req.http.cookie ~ "(comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_logged_in)" || ## Uncacheable WordPress cookies
        req.method == "POST") ## Do NOT cache POST requests
    {
        set req.http.X-Send-To-Backend = 1; ## X-Send-To-Backend is a special variable that will force the request to go directly to the backend
        return (pass); ## Now send off the request and stop processing
    }

    unset req.http.Cookie; ## Unset all cookies, see the "Dealing with cookies" section
}
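To get a feel for which requests the rules above would send straight to the backend, here is a rough Python approximation (the patterns are simplified copies of the VCL above, not the authoritative rules):

```python
import re

# Simplified versions of the uncacheable URL patterns from vcl_recv above.
BYPASS_PATTERNS = [
    r"(wp-login|wp-admin|wp-json|preview=true)",   # WordPress admin/API
    r"(cart|my-account/|checkout|wc-api/|addons|logout|lost-password)",  # WooCommerce
    r"(remove_item|removed_item)",
    r"\?add-to-cart=",
    r"\?wc-(api|ajax)=",
]

def is_bypassed(url, method="GET"):
    # POST requests are never cached; otherwise check the URL patterns.
    if method == "POST":
        return True
    return any(re.search(p, url) for p in BYPASS_PATTERNS)
```

Anything not matched by these rules is stripped of its cookies and becomes cacheable.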
vcl_hash
This step is mostly unused by our systems, but there are a few edge cases where it’s needed. Imagine you have a global store where you need to apply different VAT cases depending on the location of the visitor. In this case we need to add data to the hash key in order to make sure that the GeoIP country of the visitor is taken into account since the backend will produce different responses based on it. For example:
sub vcl_hash {
    hash_data(req.http.X-GeoIP-Location); # We assume here that the visitor location is stored in a header called X-GeoIP-Location
}
Without this statement, the view of the first user will be cached and sent out for all users. This has the potential of caching the wrong data if the backend creates different responses depending on variables that Varnish does not take into account.
🚨 This example is quite bad due to cache fragmentation. There are 195 different countries in the world at the time of writing. Caching 195 different versions of the same URL would likely decrease your hit rate to almost nothing (refer to the TTL section for info about hitrates). In the real world, we would bind lists of countries to VAT percentages and hash on those instead.
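A sketch of that idea in Python (the country-to-VAT mapping below is entirely hypothetical): hashing on the VAT band instead of the country keeps the number of cached variants down to a handful:

```python
# Hypothetical mapping from country code to VAT band.
# Many countries share a band, so the cache stores only a few variants
# per URL instead of one per country.
VAT_BANDS = {
    "SE": "vat-25", "DK": "vat-25", "NO": "vat-25",
    "DE": "vat-19",
    "GB": "vat-20",
    "US": "vat-0",
}

def hash_component(country_code):
    # Unlisted countries collapse into one default band.
    return VAT_BANDS.get(country_code, "vat-default")
```

The value returned here is what you would pass to `hash_data()` instead of the raw GeoIP country, trading precision for a far better hitrate.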
vcl_backend_response
This step is invoked after the request has been sent to the backend and the backend has responded. Events that lead to this are either a `return(pass);` action in `vcl_recv`, or the requested entry not being found in the cache. The purpose is to set the cache TTL and do header processing.
The standard Synotio `vcl_backend_response` looks like this:
sub vcl_backend_response {
    if (beresp.http.Content-Type ~ "text") {
        set beresp.do_esi = true; ## Do ESI processing on text output. Used for our geoip plugin and a few others.
        ## See https://varnish-cache.org/docs/6.1/users-guide/esi.html
    }

    if (bereq.http.X-Send-To-Backend) { ## Our special variable again. It is here that we stop further processing of the request.
        return (deliver); ## Deliver the response to the user
    }

    unset beresp.http.Cache-Control; ## Remove the Cache-Control header. We control the cache time, not WordPress.
    unset beresp.http.Set-Cookie; ## Remove all cookies. See the "Dealing with cookies" section above
    unset beresp.http.Pragma; ## Yet another cache-control header

    ## Set a lower TTL when caching images. HTML costs a lot more processing power than static files.
    if (beresp.http.Content-Type ~ "image") {
        set beresp.ttl = 1h; ## 1 hour TTL for images
    } else {
        set beresp.ttl = 24h; ## 24 hour TTL for everything else
    }
}
vcl_deliver
This step is invoked just before the response is delivered to the browser. It is used to perform post-processing after we have a response to send. The Synotio standard vcl_deliver
looks like this:
sub vcl_deliver {
    if (obj.hits > 0) { ## Add the X-Cache: HIT/MISS/BYPASS header
        set resp.http.X-Cache = "HIT"; ## If we had a HIT
    } else {
        set resp.http.X-Cache = "MISS"; ## If we had a MISS
    }

    if (req.http.X-Send-To-Backend) { ## Our special variable. Signifies a hardcoded bypass
        set resp.http.X-Cache = "BYPASS"; ## If we had a BYPASS
    }

    unset resp.http.Via; ## Remove the Via: Varnish header for security reasons. We don't want to expose that we run Varnish.
    unset resp.http.X-Varnish; ## Remove the X-Varnish header for security reasons. This would otherwise expose the Varnish version.
}
Designing for caching
When writing code, you need to keep in mind what audience your HTTP response is aimed at:
- Is it aimed at a specific user? If so, it is uncacheable.
- Is it supposed to be served to multiple users? If so, it is probably cacheable.
- Does it have any specific conditions attached to it that alter the output? If so, it probably needs special configuration.
A nice thing about JavaScript is that it scales linearly with the number of users on the site: each browser adds its own local compute power, so code that runs in the browser will never cause server load issues. A good comparison of designing for caching versus not doing so is a currency switcher:
Case 1: The browser sends a cookie to the server with a session ID. The currency is stored in the PHP session and the site outputs different pricing depending on what is stored in that session.
Result: This would make almost the entire site uncacheable, since all pages with prices on them expect to receive user-specific data (the session cookie).
Case 2: The browser sends a cookie to the server with the selected currency in clear text (SEK/ZAR/USD/EUR/whatever). The site reads the cookie and outputs different pricing depending on what is stored in the selected currency cookie.
Result: This requires special configuration and is hard to implement. Instead of removing the cookies entirely, we must parse the cookie field and remove all cookies in it except the currency switcher cookie in `vcl_recv`, then add the content of the result in `vcl_hash`, while also ensuring that the content of this cookie reaches the backend. This is doable but error-prone; troubleshooting this scenario can quickly become a nightmare. On top of that, the backend needs to redo processing for each new value of the currency switcher cookie. This adds server load, since each page can now be rendered in X different ways, where X is the number of possible currencies.
Case 3: The server outputs a JSON array with all prices on the page in the different currencies. The displayed price is selected in JavaScript based on a value in local storage (you can use cookies for this, just don’t read them from the backend). The page content remains the same from a Varnish perspective but the page is adapted on load (in JavaScript) to suit the visitor. A nice side effect of this approach is that switching to a new currency would not require a reload of the page if implemented properly. The server also does not need to render the same page many times to output different currencies, thereby decreasing load.
Result: Awesome!
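A server-side sketch of Case 3 in Python (names are illustrative): every currency is embedded once, so the cached markup is identical for all visitors:

```python
import json

def render_price_block(prices):
    # Embed all currencies in the (cacheable) markup; a browser-side script
    # then displays the one matching the visitor's stored preference.
    data = json.dumps(prices)
    return f"<span class='price' data-prices='{data}'></span>"
```

Because the HTML never varies per visitor, Varnish can serve the same object to everyone, and the browser does the per-user work.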
Troubleshooting cookies
Your main tool for troubleshooting requests involving cookies with this config is the `X-Cache` header. If it is not present at all, the site does not currently appear to be cached by Varnish (configured as described in this article), and you may need to check your server configuration.
When troubleshooting, you must know the URLs involved and what kind of data should be sent back and forth. For example, take a site that always unsets cookies.
During login, you do a `POST` to `/wp-login.php` with the user credentials. If the credentials are correct, the server must reply with a `Set-Cookie` header representing the session ID of the user. If we unset all cookies, the `Set-Cookie` header would be removed, and the next URL would therefore not see any cookies coming from the browser that the backend just sent.
If we alter the above scenario slightly, so that we still unset cookies but only do so on `GET` requests, we get a different behaviour: after the `POST` to `/wp-login.php`, the backend successfully sends a `Set-Cookie` header to the browser. The browser stores the cookie and sends it on the next request, which is a `GET`. The backend now responds with a redirect back to the login page, because even though the browser sends the cookie, Varnish removes it before passing the request to the backend. Remember, `GET` requests are by default cacheable and therefore cannot carry user-specific data like cookies unless they are coded into `vcl_recv` as such.
If we alter the above scenario again, but this time code an exception for `wp-admin` into `vcl_recv`, we get a working WP Admin. The `POST` to `/wp-login.php` sets a session identifier as a cookie. The browser receives the session identifier and proceeds with a `GET` request to `/wp-admin` containing the cookie. Varnish knows that this request is uncacheable and therefore passes it as-is to the backend. The backend responds with the WP Admin dashboard, since the session identifier was valid.
We use the `X-Cache` header to represent the different scenarios:
- The output contains an `X-Cache: BYPASS` header: Varnish determined, using `vcl_recv`, that the request was uncacheable and should therefore be passed as-is directly to the backend.
- The output contains an `X-Cache: HIT` header: Varnish determined, using `vcl_recv`, that the request was cacheable and looked it up in the cache store. The result was found and served from the cache store without involving the backend.
- The output contains an `X-Cache: MISS` header: Varnish determined, using `vcl_recv`, that the request was cacheable and looked it up in the cache store. The result was not found, so it was fetched from the backend and then stored in the cache store. The next result will be `HIT`.
Most troubleshooting you encounter will likely be related to uncacheable requests being treated as cacheable. Make sure you determine the above information as part of your troubleshooting process. You can also manually set cookies in your browser that will force a cache bypass, such as the `wordpress_logged_in` cookie. See the `vcl_recv` section above for possible cookie names.
TTL
ℹ A URL that has been stored in the Varnish cache store is called an object.
The Time To Live (TTL) value set in `vcl_backend_response` (see the VCL section above) determines how long objects remain in the cache storage. Objects can be replaced prematurely if they are banned (purged), or if there is no space left in the cache storage and new objects need to be added. Varnish prioritises which objects to keep based on how frequently they are accessed: objects that are accessed more often have a higher priority for remaining in storage than less frequently accessed ones.
When the TTL expires, the object isn’t immediately removed from the cache storage. Rather it’s marked as expired and will remain so until a new object is able to replace it. When an expired object is accessed during the lookup phase (see the Varnish request pipeline above), there are different possible design patterns that can be applied:
- Fetch a new object from the backend to replace the expired object. This is how our Varnish implementation currently works under normal conditions.
- Serve the stale (expired) object but fetch the new object from the backend in the background. This enables Varnish to instantly serve a request without needing to wait for the backend. The downfall of this method is obviously that we are serving stale objects. We transition to this method if the Varnish health check has marked the backend as down. The alternative would be to serve a HTTP 503 instead.
It is possible to apply case 2 before the TTL has expired, for example if the remaining TTL of the object is less than 30 minutes. This is useful for enabling Varnish to refresh objects in the background while respecting the TTL of the objects and we might transition to this method in the future.
The (almost) infinite TTL
The problem with low TTLs is obvious: they require us to fetch objects from the backend more frequently, thereby increasing the load on the backend and decreasing the efficiency of the cache. This efficiency is expressed as hitrate: the percentage of requests in a given timeframe that have been served out of the cache. A 50% hitrate means that half of the requests were served out of the cache and the other half were directed to the backend.
As you increase the TTL however, the problem with serving stale data grows. This results in frustration for content editors since they now need to wait for the TTL to expire before the users see their content – or even worse, users seeing products as in stock when they are in fact not. This usually results in requests to lower the TTL or get rid of the cache completely. We don’t want to do this.
What if we could have the best of both worlds? The Synotio standard TTL is 24 hours, but there are reasons to sometimes increase this even further. It is therefore, for practical purposes, infinite. With such a long TTL, waiting for the object to expire is not an option. Therefore, we require work from the application layer (WordPress/WooCommerce) to deal with this problem.
Purging
There are only two hard things in Computer Science: cache invalidation and naming things.
Phil Karlton
Cache invalidation, or purging, is the process of telling Varnish when the content at a URL has been updated. This is the purpose of the `varnish-http-purge` plugin. By automatically invalidating the relevant URLs, we achieve two things:
- The TTL doesn’t matter for the purpose of seeing changes, since they will be shown instantly.
- We only need to purge parts of the cache, not the whole cache.
🚨 When users get the option, they usually purge the whole cache storage to be sure their content has been purged. The action of purging the whole cache has drastic performance costs and may even end up taking down the backend if the load is high enough since the backend will be bombarded with requests for object fetches as the cache data store is empty. In the case of large sites, it might take days or weeks for the cache store to fully populate again.
🚨 If you encounter stale data on the site, it’s tempting to use the ‘Purge’ button in the CMS. We don’t recommend this, instead see this as an issue that should be submitted as a support ticket so that we can troubleshoot and solve it.
In practice, we purge by sending an HTTP `PURGE` request as follows:
PURGE /my/example/resource HTTP/1.1
Host: example.com
X-Purge-Method: Exact
The above request will purge the `/my/example/resource` URL, matching the string exactly. The value of `X-Purge-Method` can be either `exact` or `regex` and is case insensitive. If this header is absent, `vcl_recv` will fall back to exact purging.
For regex queries, the request looks as follows:
PURGE /my/example/.* HTTP/1.1
Host: example.com
X-Purge-Method: Regex
The above request will purge every URL starting with `/my/example/` from the cache storage.
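Both request shapes can be assembled in a few lines of Python (a sketch; in practice the varnish-http-purge plugin sends these for you):

```python
def build_purge_request(path, host, method="Exact"):
    # Assemble the raw HTTP PURGE request shown above. In practice you
    # would send this to the Varnish instance, e.g. with http.client.
    return (
        f"PURGE {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"X-Purge-Method: {method}\r\n"
        "\r\n"
    )
```

Passing `method="Regex"` together with a pattern such as `/my/example/.*` produces the regex variant instead.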
The purge logic itself is implemented in the `vcl_recv` section above, but is repeated below for clarity, with the called subroutines included:
sub vcl_recv {
    ## Handle PURGE requests differently depending on whether we're purging an exact URL or a regex, set by the varnish-http-purge plugin
    if (req.method == "PURGE") {
        if (req.http.X-Purge-Method ~ "(?i)regex") { ## Check if X-Purge-Method matches "regex", case insensitive
            call purge_regex;
        } elsif (req.http.X-Purge-Method ~ "(?i)exact") { ## Check if X-Purge-Method matches "exact", case insensitive
            call purge_exact;
        } else {
            call purge_exact;
        }
        return (purge); ## Terminate the request. There is no point in sending it to the backend since it's meant for Varnish.
    }
}
sub purge_regex {
    ## Construct a ban (purge) command in this way:
    ## ban req.url ~ "/my/example.*" && req.http.host == "example.com"
    ban("req.url ~ " + req.url + " && req.http.host == " + req.http.host);
}

sub purge_exact {
    ## Construct a ban (purge) command in this way:
    ## ban req.url == "/my/example/resource" && req.http.host == "example.com"
    ban("req.url == " + req.url + " && req.http.host == " + req.http.host);
}
Troubleshooting purging
If the application does not send `PURGE` requests to Varnish for all actions that change data at cacheable URLs, those objects will not be updated and will show stale data until their TTL expires. When troubleshooting this problem, there are two headers to help you:
- `X-Cache: HIT` – For cached data to be stale, it must by definition have been stored in the cache in the first place. If you’re getting any other value but the data is still stale, this is a sign of a problem with the backend, not Varnish.
- `Age: <number>` – The Age header specifies how long (in seconds) the object has been stored. You should see this value increase on subsequent requests. This also lets you determine how long the data has been stale.
When issuing a `PURGE` for a URL, the next request for that URL will be a cache miss, setting the `X-Cache` header to `MISS`.
On sites with high traffic, there is a chance that someone requested the URL between the time you purged it and the time you refreshed your browser. In this case, your response will be a cache hit, but the purge will still have been issued correctly. Therefore, look at the `Age` header before and after your purge action (regardless of whether it was manually initiated or not): it should go from a high value to a low value upon purge, since the object has been fetched from the backend more recently.
If the value of the `Age` header does not drop when a purge action is initiated, it is a sign that the application is either not trying, or failing, to send `PURGE` requests to Varnish.
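That check can be captured in a small helper (a sketch; the 60-second threshold is an arbitrary assumption):

```python
def purge_looks_effective(age_before, age_after):
    # After a successful purge, the object is re-fetched from the backend,
    # so the Age header should drop and reset to (near) zero.
    # The 60-second cutoff is an arbitrary choice for this sketch.
    return age_after < age_before and age_after < 60
```

Feed it the `Age` values observed before and after the purge; a `False` result points to the application failing to send `PURGE` requests.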
Edge Side Includes (ESI)
ℹ The official Varnish documentation on ESI is located here. It is highly recommended that you read it before doing ESI implementations.
Varnish provides the ability to include the content of other URLs by parsing the content of the response if `beresp.do_esi = true` (see `vcl_backend_response` in the VCL section). Take this content for example:
<html>
<body>
<esi:include src="/README.txt" />
</body>
</html>
The content of `/README.txt` will be included inside the body block.
ESI can also remove data!
🚨 Even though ESI can remove data, don’t use it for sensitive stuff. Any instance where the page is rendered without ESI support will immediately expose it.
<html>
<body>
<esi:remove>
This is a secret message that the browser will never see if ESI support is enabled :)
</esi:remove>
Hello!
</body>
</html>
Once processed by Varnish, the above will output:
<html>
<body>
Hello!
</body>
</html>
You can also test whether or not ESI support is available. This is useful for controlling fallback behaviour. Let’s take this for example:
<html>
<body>
<esi:remove>
<script>
var is_esi_supported = false;
</script>
</esi:remove>
<!--esi
<script>
var is_esi_supported = true;
</script>
-->
<script>
console.log("Was this page rendered with ESI support? ", is_esi_supported);
</script>
</body>
</html>
If the page was processed by the Varnish ESI engine, it will look like this:
<html>
<body>
<script>
var is_esi_supported = true;
</script>
<script>
console.log("Was this page rendered with ESI support? ", is_esi_supported);
</script>
</body>
</html>
If ESI support was not available, the output will be unaltered. The browser will treat `<esi:remove>` as an unsupported element and will just continue processing the HTML inside it. The `<!--esi -->` tag will be processed as a comment, and the content inside it will therefore not be evaluated.
You can also nest content inside the `<!--esi -->` tag:
<html>
<body>
<!--esi
<p>Below will do an ESI include:</p>
<esi:include src="/README.txt" />
-->
This is the only text on the page if rendered without ESI support.
</body>
</html>
The following three tags are available:
- `<esi:remove>content</esi:remove>` removes content from the page
- `<!--esi -->` will remove the leading and trailing markers, i.e. `<!--esi` and `-->`, but process the content inside them
- `<esi:include src="url" />` includes the content of another URL
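The two removal behaviours can be emulated with a toy processor (an illustration only; the real ESI engine in Varnish is a streaming parser, not a pair of regexes):

```python
import re

def toy_esi_process(html):
    # Drop <esi:remove> blocks entirely, like the Varnish ESI engine does.
    html = re.sub(r"<esi:remove>.*?</esi:remove>", "", html, flags=re.S)
    # Unwrap <!--esi ... -->: keep the inner content, drop the markers.
    html = re.sub(r"<!--esi(.*?)-->", r"\1", html, flags=re.S)
    return html
```

Running it over the fallback-detection example above yields the same result as ESI processing: the `<esi:remove>` branch disappears and the `<!--esi -->` branch survives.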
🚨 Varnish by default only does ESI processing on HTML or XML content (e.g. pages). Don’t try to do it in JS/CSS files.
🚨 Varnish can only read HTTP (non-SSL) URLs. It does not speak HTTPS/SSL. To work around this, make sure to only use ESI to include resources using relative paths, e.g. `/README.txt`, not `https://example.com/README.txt`.
What can you do with ESI? Here’s a couple of ideas:
- Have different TTLs or cache keys on parts of the same page
- Include small dynamic content. For example, rather than asking the backend for the GeoIP location through an uncacheable AJAX request, embed it with an `esi:include`. This is mainly useful if the URL in the `esi:include` tag is a synthetic response with special logic in `vcl_recv` for it.
- Embed SVG content into HTML without having to process it through PHP
- Do partial caching, or fragment caching, to divide a single page into different cacheable parts
- Purge only parts of a page without having to re-render all of it, for example changing the content of a header without having to purge all pages using that header.
- Have multiple product pages with each product card being an ESI include. You could purge a single product card without having to purge the product pages. This would also save you from having to look up all the pages a product is referenced to when purging that product.
Conclusion
Managed to read all of this? Give yourself a cookie 🍪
Does VCL float your boat? Are you excited to make WordPress faster than seems natural and proper? Why not join our team!
Are you looking for a WooCommerce expert partner to help make your site faster? Varnish is just the start of what we can do – contact us.