Let's Learn 11ty Part 9: Data Revisited

We are back again with more Eleventy. I said I would come back and explain and showcase some things when I understood them better.

That time has arrived!

To explain these concepts more clearly, I will not be using the learneleventy project we already have. Instead, I am going to make a new one with the sole purpose of elaborating on the concepts we left unexplored.

You don’t have to start a new project; you will, however, need to make some modifications to the one we were working on. I nevertheless suggest starting fresh - it lets you see things in isolation and incorporate them later.

Don’t worry, I’ll explain what you have to do if you’re using the existing version.

For Those Starting A New Project

If you are starting afresh, follow this series’ steps up to the end of Part 4.

For Those Using The Project We Have

If you are using the project we have, make these changes:

  • Delete everything in the blog folder - we’ll be getting the posts from somewhere else.

External Data

What we are going to do now is fetch data (posts) from elsewhere and render it on our site.

For that, we will use the Hashnode API because it is one of the more straightforward ones to interact with for our particular use case.

Before We Begin

To communicate with Hashnode, we will need to install a couple of packages, namely:

  • graphql
  • graphql-request

```
npm install graphql graphql-request
```

Bringing Posts Into Our Project

If you recall this:

> Global data refers to files we place in the _data folder. In most instances we use global data for things we want site-wide, for example the site title and description.

I must add an addendum to this. The other thing we use the _data folder for is to get data from external sources, like APIs.

To fetch our posts from Hashnode, we are going to create a file in the _data folder that does the fetching.

Let’s first look at what the response from Hashnode looks like:

[Screenshot: the Hashnode API playground, showing our query and its response]

On the left is the query we have to make, on the right is the data we get back.
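In rough outline, the data we get back is shaped like this - the field values below are invented for illustration, and graphql-request unwraps the outer `data` key of the raw GraphQL response for us:

```json
{
  "user": {
    "publication": {
      "posts": [
        {
          "title": "My First Post",
          "coverImage": "https://cdn.hashnode.com/...",
          "brief": "A short excerpt of the post...",
          "slug": "my-first-post",
          "dateAdded": "2021-06-01T00:00:00.000Z",
          "contentMarkdown": "The full post body as raw markdown..."
        }
      ]
    }
  }
}
```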

With this knowledge, we can move on.

Let’s make a file in _data called posts.js and put this in it:

```js
/* src/_data/posts.js */

const endpoint = "https://api.hashnode.com/";

module.exports = async () => {
  const { GraphQLClient, gql } = require("graphql-request");

  const client = new GraphQLClient(endpoint);

  const query = gql`
    {
      user(username: "Psypher1") {
        publication {
          posts {
            title
            coverImage
            brief
            slug
            dateAdded
            contentMarkdown
          }
        }
      }
    }
  `;

  const posts = await client.request(query);

  return posts.user.publication.posts;
};
```

If you have a Hashnode account, feel free to replace Psypher1 - my username - with yours.
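If you want to sanity-check the fetch outside of Eleventy, one option - a throwaway helper of my own, not part of the project - is to run the data file directly with Node:

```js
/* check-posts.js - hypothetical helper; run with `node check-posts.js` */
const getPosts = require("./src/_data/posts.js");

getPosts().then((posts) => {
  // How many posts came back, and a peek at the first one
  console.log(`${posts.length} posts fetched`);
  console.log("First post:", posts[0].title, "->", posts[0].slug);
});
```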

Next, we are going to go into our blog.njk file and make sure our for loop looks like this:


```html
<!-- src/blog.njk -->
<h1 class="">Blog Home</h1>

<ul class="">
  {% for post in posts %}
    <li>
      <a class="" href="/blog/{{ post.slug }}">{{ post.title }}</a>
    </li>
  {% endfor %}
</ul>
```

Things to note:

  • Since we are fetching data from an API, we are no longer using collections.post. Instead, we are looping through posts - the global data file we made.

NOTE: Data files in the _data folder can be accessed from anywhere on your site.
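For instance, because posts is global, we could just as easily use it outside blog.njk - say, to surface the latest post on the home page. This is only an illustration; the index.njk below is hypothetical and not something this part requires:

```html
<!-- e.g. src/index.njk (hypothetical example) -->
<p>
  Latest post:
  <a href="/blog/{{ posts[0].slug }}">{{ posts[0].title }}</a>
</p>
```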

[Screenshot: the blog home page listing the fetched posts]

With a few modifications, we now have a new list of posts fetched from an external source.


Computed Data

Now that we are able to display the fetched posts, we can work on rendering the individual posts.

Let’s make a new file called postLayout.njk and put this in it:


```html
---
layout: base
pagination:
  data: posts
  size: 1
  alias: post
permalink: "/blog/{{ post.slug | slugify }}/"
templateEngineOverride: njk,md
eleventyComputed:
  title: "{{ post.title | safe }}"
---
<!-- src/_layouts/postLayout.njk -->

<div>
  <h1>{{ post.title }}</h1>

  {{ post.contentMarkdown }}
</div>
```

Let’s focus on the frontmatter:

  • layout: like what we’ve been doing prior, we are setting it up so this layout uses the base layout
  • pagination: This is the special sauce we’ve introduced to generate the individual post pages from the data we’ve provided.
    • data: This refers to the data we want to paginate through. In our case - posts.
    • size: This defines how many items we want on each generated page. Since we want one post per page, we’ve set it to 1.
    • alias: This is a different name you can give to the items (posts) you are paginating through. It can be anything you want.
  • permalink: We looked at permalink in passing before. Here, we are making better use of it by specifying the URL that will be created for each post.
  • templateEngineOverride: During my tests I found that in order to render the post content properly - raw markdown in this case - I had to set a template override telling Eleventy which template languages to process and in what order.
  • eleventyComputed: covered in its own section below.

eleventyComputed

This falls under the Computed Data stage of Eleventy’s Data Cascade. At the end of it - the cascade - we may want to add additional data to our templates. That is where the eleventyComputed key comes in. We have used it to add title to each individual post - the same key we set for our pages when we began this journey.
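To make that concrete, here is a minimal sketch of how the computed title flows into the page head - assuming a base layout roughly like this; yours may differ:

```html
<!-- src/_layouts/base.njk (sketch - your actual base layout may differ) -->
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8">
    <!-- `title` comes from each page's data - for posts, via eleventyComputed -->
    <title>{{ title }}</title>
  </head>
  <body>
    {{ content | safe }}
  </body>
</html>
```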

NOTE: slugify is the improved successor to the slug filter and helps us create more readable URLs - from now on we’ll use slugify.
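As a quick illustration of what the filter does to a string:

```html
{{ "Hello, Eleventy World!" | slugify }}
<!-- outputs: hello-eleventy-world -->
```

That is why the permalink in postLayout.njk pipes post.slug through slugify - it guarantees a clean URL segment even if a slug ever contains odd characters.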


Let’s now move to our blog folder and create a post.njk file in it with this content:


```html
---
layout: postLayout
---
<!-- src/blog/post.njk -->
```

If we have done everything correctly, pages for our posts should have been generated and we can navigate to them:

[Screenshot: a single generated post page]

It doesn’t look pretty, but we can now read our posts on our Eleventy site.


Some FAQ Stuff

Why don’t I have the parts as separate branches in the repository? The thought crossed my mind, but a couple of things stopped me from doing that:

  • I’m not that well versed in managing multiple branches without breaking everything.
  • CloudCannon already creates a branch for its workflow, alongside the main branch for the current flow - that’s as much branch management as I’m comfortable with on my own.

Conclusion

With some time, research, and a lot of questions asked, we now have a better understanding of Eleventy’s data.

In this article we:

  • Switched from local markdown files to fetching them from an API
  • Listed the fetched posts on our site
  • Rendered the individual posts on the site

As always:


Thank you for reading, let’s connect!

Thank you for visiting this little corner of mine. Let’s connect on Twitter, Polywork, and LinkedIn.


Originally published on Dante Decodes.