Building a photography website
Last year, I started a photography hobby. Soon after, I created a place where I can share some of my work, without any attention-driven algorithms dictating the terms. Here's a technical write-up of my journey.
Motivation
Why should one build a photography website in the first place?
Similar to this blog, I want to reduce my dependency on social media and third-party services. Even if I ignored Instagram's privacy issues and dark patterns, the fact that they can delete my account on a whim is unacceptable. In fact, they shadow-banned my first Instagram account because of "suspicious activity". Pixelfed, a federated photo sharing platform, solves many issues of Big Tech. However, I can't guarantee that the instance I chose will stand the test of time. Glass has potential, but it comes with some issues, like no way to provide an alt description for an image.
Instead of relying on third-party providers alone, I practice POSSE (Publish on your Own Site, Syndicate Elsewhere): I publish photos on my own site and syndicate them to my Pixelfed, Glass and Instagram accounts. This approach brings some further advantages:
- Anyone can access my photos, without the need for an account.
- People can subscribe via RSS to view my latest photos without an algorithm-driven feed.
- I can use a custom, personalized design.
- I can update my photos, e.g., make changes after getting more experienced with RAW development and photo editing.
- I can use any aspect ratio. Instagram often crops the image preview even when "original size" is selected on upload.
- I own my content.
Inspiration
Most photography websites are from professional artists who provide a showcase for potential clients. I was looking for a more personal look and feel. Fortunately, some fellow software developers share a photography hobby: Nicolas Hoizey (my favorite site!), Moritz Petersen, Artem Sapegin, Florian Ziegler, Evan Sheehan, Greg Morris, Alan W. Smith, Hidde de Vries (write-up), Shom Bandopadhaya, Jens Comiotto-Mayer, Matthew Howell, Paul Stamatiou, Jesper Reiche, Jamie Dumont, Chuq von Rospach and Matze. All those websites helped me narrow down the features I want for my own portfolio website.
Design
As mentioned, I don't want to convey a businesslike feel. I want a personal website that sparks joy. I've decided to go for a pinboard design:
- The background mimics a corkboard.
- The photos get a big white border, simulating printed Polaroid photos.
- There are three variants of a sticky tape effect.
- The website uses a handwriting font ("Itim").
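To give an idea, the Polaroid border and the tape effect boil down to a few CSS rules along these lines (a simplified sketch with made-up class names, not the actual stylesheet):

.photo {
  /* thick white border to mimic a printed photo */
  background: #fff;
  padding: 1rem 1rem 2.5rem;
  box-shadow: 0 2px 6px rgba(0, 0, 0, 0.3);
  position: relative;
}

.photo::before {
  /* one of the sticky tape variants: a translucent, slightly rotated strip */
  content: "";
  position: absolute;
  top: -0.75rem;
  left: 50%;
  width: 6rem;
  height: 1.5rem;
  background: rgba(255, 255, 240, 0.6);
  transform: translateX(-50%) rotate(-3deg);
}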
Previously, I rotated the photos randomly on every build to add some chaos. I loved the effect, but the rotation required the pictures to be interpolated, making them slightly blurry.
I'm sure this design will change over time — check photos.darekkay.com for the current state.
Implementation
Let's take a glimpse at the technical side.
Content management
Similar to this blog, I went with the Eleventy static site generator.
I've decided to index my photos, starting at 0001. For each photograph that I publish, there are five files:
📁 content
└─ 📁 photo
   └─ 📁 0001
      ├─ 📄 0001.11tydata.json
      ├─ 📄 0001.md
      ├─ 🖼️ 0001-medium.jpg
      ├─ 🖼️ 0001-small.jpg
      └─ 🖼️ 0001-small.webp
The 11tydata.json file contains the photo metadata. The Markdown file contains the actual content: title, alt description, location, publish date and a short text. The small.webp and medium.jpg images are used for the gallery and the preview page, respectively. The small.jpg file is used as the social preview card image, as WebP support there is still lacking.
Originally, I used a single photos.json file to store both the metadata and the content. Using Eleventy pagination, I didn't have to create dedicated Markdown files. Once I started adding short descriptions to my photos, this workflow was no longer viable, but it's still a good alternative for basic photo galleries (see the sketch below). Make sure to also check Evan's approach of using the image files as Eleventy templates.

I store the medium and small images as part of the project's Git repository. Git isn't the best choice for storing binary files, but it hasn't caused any bottleneck yet. Each photo triplet is around 350 kB on average. I could use Git LFS, but it's not worth the effort for now.
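A minimal sketch of that pagination setup, assuming the photo list lives in Eleventy's global data directory (e.g., _data/photos.json) and each entry has illustrative id, src and alt fields:

---
pagination:
  data: photos
  size: 1
  alias: photo
permalink: "/photo/{{ photo.id }}/"
---
<img src="{{ photo.src }}" alt="{{ photo.alt }}" />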
Loading performance
I've put much thought into the loading behavior to ensure a good user experience even on slow networks. I also wanted to avoid pagination and infinite scroll for the image gallery.
First, every image provides its width and height to prevent layout shifts.
Second, I use lazy loading, a performance strategy to load resources (like images) only when needed. Fortunately, most browsers support native lazy loading for images:
<img loading="lazy" [...] />
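Putting the first two points together, a gallery image ends up looking roughly like this (the values are illustrative):

<img
  src="/photo/0001/0001-small.webp"
  width="500"
  height="375"
  alt="Description of the photo"
  loading="lazy"
/>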
Third, the gallery serves WebP images at 75% quality. This saves around 50% of space compared to the original JPEG file at 85% quality. I accept the quality loss for the gallery, but I still serve the full-quality JPEG files on the individual photo pages.
The last technique is to provide a good fallback while images are still being loaded. I provide two fallbacks:
- A fixed background color.
- BlurHash, a compact representation of an image placeholder (also called "low-quality image placeholder").
While the JavaScript BlurHash script (1.6 kB) is loading, visitors see the fixed background color. After the script has loaded, the BlurHash placeholder is applied. Finally, the actual image appears.
Not everyone has JavaScript enabled, a fact many developers like to ignore. The nice thing about the BlurHash approach is that it's progressively enhanced: without JavaScript, only the fixed-color fallback is displayed.
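For context, the client-side part is small: the blurhash package decodes the hash into raw RGBA pixels, which can then be painted onto a tiny canvas behind (or in place of) the image until it loads. A rough sketch (the hash, canvas selector and dimensions are illustrative):

import { decode } from "blurhash";

const canvas = document.querySelector("canvas.placeholder");
const ctx = canvas.getContext("2d");

// decode() returns width * height RGBA values as a Uint8ClampedArray
const pixels = decode("LEHV6nWB2yk8pyo0adR*.7kCMdnj", 32, 32);
const imageData = ctx.createImageData(32, 32);
imageData.data.set(pixels);
ctx.putImageData(imageData, 0, 0);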
How do we test the loading behavior? Browser network simulations are useful, but not so much for local images, as they'll still load almost instantly. Instead, I've created custom Eleventy middleware to delay image loading artificially during testing.
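In essence, the middleware just holds back responses for image requests. Something along these lines, shown here for the Eleventy dev server's middleware option (the exact hook depends on the Eleventy version, and the delay is arbitrary):

// .eleventy.js
module.exports = (eleventyConfig) => {
  eleventyConfig.setServerOptions({
    middleware: [
      (req, res, next) => {
        // artificially delay images to test placeholders and lazy loading
        if (/\.(jpe?g|webp)$/i.test(req.url)) {
          setTimeout(next, 2000);
        } else {
          next();
        }
      },
    ],
  });
};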
Navigation
On each page, I've placed a link to the "previous" and "next" photo. I've implemented this using a custom Eleventy collection:
eleventyConfig.addCollection("photos", (collection) => {
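  // "filter" is a custom project helper that keeps only the items matching the glob,
  // roughly what collection.getFilteredByGlob("content/photo/**") would return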
const photos = filter("content/photo/**")(collection);
for (let i = 0; i < photos.length; i++) {
const prevPost = photos[i - 1] || photos[photos.length - 1];
const nextPost = photos[i + 1] || photos[0];
photos[i].data["previousPost"] = prevPost;
photos[i].data["nextPost"] = nextPost;
}
return photos;
});
I can then access the page URLs in my layout file:
<a href="{{ previousPost.url }}">Previous photo</a>
<a href="{{ nextPost.url }}">Next photo</a>
RSS
I consider RSS a must for any blog-like website, and a photo gallery is no different. Here's my RSS feed. It contains the entire post content and the small image preview. I've also styled the RSS feed so that it matches the website design.
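If you want to style your own feed, the usual trick is to reference an XSL stylesheet right after the XML declaration; browsers will then render the feed like a regular page (the stylesheet name is illustrative):

<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet href="/feed.xsl" type="text/xsl"?>
<!-- ... the usual <rss> or <feed> markup follows ... -->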
Accessibility
I care a lot about web accessibility. I've tried my best to make sure anyone can use my website, including people with impairments or disabilities.
An important part is to provide a description for every image via an alt attribute. As long as there's no AI that can translate a picture and its essence into words (and I don't think we'll get there anytime soon), artists have to handle this themselves. I try to describe what a photo contains, but also the feeling it conveys. This has a nice side effect: it makes me think more about my photos. I must admit that I struggle with this as much as I do with photo titles, but I think it will become easier with more experience.
Apart from that, I've followed the usual path:
- Use my experience to ensure an accessible implementation.
- Check the website with a keyboard and with a screen reader.
- Run Evaluatory to check whether I've made any common mistakes.
Pipeline
Here's my workflow pipeline for publishing a new photo, using 05.jpg as an example file.
Most of those steps are automated.
Preparation
In the first step, I strip irrelevant photo metadata using ExifTool. I leave all the data that other photographers might be interested in, e.g., aperture, exposure and ISO:
exiftool -all= -tagsfromfile @ -AllDates -Make -Model -LensModel -Artist \
-FNumber -ISO -ExposureTime -ExposureProgram -ExposureMode \
-ExposureCompensation -FocalLength -WhiteBalance -Flash 05.jpg
Next, I use ImageMagick to create two files: a small thumbnail and a medium-sized photo. The > suffix only shrinks images larger than the target height and needs to be quoted so the shell doesn't interpret it as a redirect:
magick convert -resize "x1000>" 05.jpg 05-medium.jpg
magick convert -resize "x375>" 05.jpg 05-small.jpg
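The small WebP variant for the gallery can be created the same way, e.g.:

magick convert -resize "x375>" -quality 75 05.jpg 05-small.webp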
The entire pre-processing takes one click using XYplorer, my indispensable Windows file manager.
Metadata update
The next workflow step creates an 11tydata.json file, which contains relevant Exif data and the blurhash.
I use exiftool and jq to create a temporary exif.json file containing the Exif metadata from all photos:
exiftool -ext jpg -json -FileName -all -d %Y-%m-%d content/photo -r \
| jq 'map(select(.FileName | contains ("medium"))) | map(.+{"id": .FileName[0:4]}) | map(del(.SourceFile,.FileName))' \
> temp/exif.json
To calculate the blurhash, I use blurhash and sharp:
const sharp = require("sharp");
const { encode } = require("blurhash");

const encodeImageToBlurhash = (path) =>
  new Promise((resolve, reject) => {
    sharp(path)
      .raw()
      .ensureAlpha()
      .resize(32, 32, { fit: "inside" })
      .toBuffer((err, buffer, info) => {
        // reject before touching "info", which is undefined on error
        if (err) return reject(err);
        // encode(pixels, width, height, componentX, componentY)
        resolve(
          encode(new Uint8ClampedArray(buffer), info.width, info.height, 4, 4)
        );
      });
  });
Both results are then normalized and piped into an 11tydata.json file.
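That glue is a small Node script; here's a simplified sketch, reusing encodeImageToBlurhash from above (the exact paths and the source image for the blurhash are assumptions):

const fs = require("fs");
const path = require("path");

const exifData = require("./temp/exif.json");

(async () => {
  for (const entry of exifData) {
    const photoDir = path.join("content", "photo", entry.id);
    // compute the placeholder from the small gallery image
    const blurhash = await encodeImageToBlurhash(
      path.join(photoDir, `${entry.id}-small.jpg`)
    );
    // write the merged metadata next to the photo
    fs.writeFileSync(
      path.join(photoDir, `${entry.id}.11tydata.json`),
      JSON.stringify({ ...entry, blurhash }, null, 2)
    );
  }
})();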
Content update
Finally, I need to handle the content. A script creates a template Markdown file that I then edit manually. While I don't always come up with an image title or description, I'll always provide an alternative text, as explained in the accessibility section. I include the photo location only if it's relevant.
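The generated template might look something like this before I fill it in (the exact front matter fields are illustrative):

---
title: ""
alt: ""
location: ""
date: 2023-01-01
---

A short text about the photo.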
Further steps
I'm happy to have a place to share my photos that I have full control over. It's also a nice way to see my progress as a photographer. While there are photography areas that I like more than others, I probably won't settle on a certain niche. I might introduce browsable categories someday if the number of photos becomes overwhelming. I might also add filtering by camera or lens.
Check out my photos at photos.darekkay.com.