Learn how to reset database state with Cypress tasks
Most interesting apps are stateful. They save information for later in some kind of database. This can make them harder to test, since each test could change the state of the app, making subsequent tests unpredictable. We're going to learn how to write tests that work with our database.
Download the starter files
Change into the workshop directory
Create a local DB by running ./scripts/create_db
Populate the DB by running ./scripts/populate_db
Start the server with npm run dev and check it's working
There is a very minimal Cypress testing setup. You can start Cypress with npm test—you should see the single example test from cypress/integration/test.js.
It's easy to create tests that rely on each other. For example, imagine we had an app for managing different types of dogs. Here are two example tests; one checks that creating a dog works, and one checks that deleting a dog works:
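The original code isn't shown here, so this is a hypothetical sketch of what two coupled tests might look like (the route, form fields and dog name are all assumptions):

```js
it("can create a dog", () => {
  cy.visit("/");
  cy.get("input[name=name]").type("Rover");
  cy.get("form").submit();
  cy.contains("Rover");
});

it("can delete a dog", () => {
  cy.visit("/");
  cy.contains("li", "Rover").find("button").click();
  cy.contains("Rover").should("not.exist");
});
```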
These tests work fine together, but if we ran the delete test on its own it would fail. It expects there to already be a dog called "Rover" in the database, but this only happens if the creation test runs first.
This is a bad idea since it makes tests brittle. If somebody comes along later and swaps the order of the tests (or removes a test) it could break things. Tests should always be self-contained and able to run on their own.
Cypress has a handy way to run some code before every test—the beforeEach method. You can pass this a function and Cypress will run it before each test:
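A minimal sketch:

```js
beforeEach(() => {
  console.log("I run before every test in this file");
});
```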
We can use this to reset our database before each new test runs.
Unfortunately there's a slight complication: all the code inside our test runs in the browser. The browser doesn't have access to our Node environment, so it can't talk to our database directly.
However Cypress provides a way to execute code in your Node environment: "tasks". These are special functions you can create and then call from inside tests. Tasks are defined inside cypress/plugins/index.js:
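For example, a "log" task (the task name is our choice):

```js
// cypress/plugins/index.js
module.exports = (on, config) => {
  on("task", {
    log(message) {
      console.log(message);
      return null; // a task must return a value (or null)
    },
  });
};
```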
We can then call this task in our tests like this:
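For example, calling a task named "log" (whatever name was registered in the plugins file):

```js
it("logs via a task", () => {
  cy.task("log", "hello from Node");
});
```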
Try creating this task now, and make sure you can call it from inside a test. You should see the log show up in your terminal, not the test browser (remember tasks are for running Node code outside the browser).
We already have a script that can reset our DB back to its initial state: ./scripts/populate_db. We can re-use this within our Cypress task using Node's child_process module. This allows us to run terminal commands from inside Node.
In this case we want to use the execFileSync method to execute a given file:
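Putting this together, a sketch of a task (the name "resetDb" is our choice) that re-runs the populate script:

```js
// cypress/plugins/index.js
const { execFileSync } = require("child_process");

module.exports = (on, config) => {
  on("task", {
    resetDb() {
      execFileSync("./scripts/populate_db");
      return null;
    },
  });
};
```

You can then call cy.task("resetDb") inside a beforeEach to reset the database before every test.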
Now you can finally write some tests without worrying about the DB.
Add a test verifying the / route displays a list of users
Add a test verifying you can create a user from the /create-user route
Add a test verifying you can delete a user from the / route
Currently we're querying our database directly within our route handler functions. It's generally a good idea to separate out data access. Ideally our route handlers shouldn't have to do anything more than call a function like getUsers(). That way we could e.g. swap from Postgres to a totally different DB without having to change our route handlers at all.
The nice thing about already having tests is refactoring becomes much safer. You'll find out quite quickly if you break something.
Let's extract the homepage's query. Create a new file workshop/database/model.js. Since there aren't many queries we'll put all our SQL into this one file.
Create a function named getUsers. Import your database pool object and use it to get all the users, just like before:
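A sketch; the pool import path and table name are assumptions based on the workshop's structure:

```js
// workshop/database/model.js
const db = require("./connection");

function getUsers() {
  return db.query("SELECT * FROM users");
}

module.exports = { getUsers };
```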
You can now import and use this function in routes/home.js:
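Roughly like this (renderUsers is a hypothetical render helper standing in for however the route builds its HTML):

```js
// routes/home.js
const model = require("../database/model");

function get(request, response) {
  model.getUsers().then((result) => {
    const users = result.rows; // still Postgres-specific here
    response.send(renderUsers(users)); // renderUsers is hypothetical
  });
}
```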
This should work, but we can improve it. Our handler is stuck dealing with database-specific details, i.e. it has to know about the Postgres result.rows property. Ideally we should do this data-processing inside model.js instead:
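A sketch, assuming the same pool import as before:

```js
// workshop/database/model.js
function getUsers() {
  // resolve with just the rows, hiding the Postgres result shape
  return db.query("SELECT * FROM users").then((result) => result.rows);
}
```

The route handler can now work with a plain array of users and never touch result.rows.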
Move the rest of the DB queries into model.js. Make sure all the tests keep passing!
Write a createUser function to insert new users
Write a deleteUser function to delete a user
Write a getPosts function to select all the blog posts
Refactor your route handlers to use the new model functions
Learn how to create CSS layout primitives and compose them together to create complex designs.
Modern CSS has powerful tools for controlling where elements go on the page. We're going to learn how to create style "primitives" (single-purpose bits of CSS) to solve different layout requirements, then see how we can combine those primitives together to create more complex layouts.
Let's quickly review some of the ways we can control layout with CSS.
Flow layout is the default way elements behave. Block elements like div, header and p take up the full width of the page. Inline elements like span, strong and a only take up as much horizontal space as they need, and can sit next to each other.
The viewport scrolls vertically by default when there's too much content to fit on the screen. If the content is too wide to fit, the browser will wrap elements onto the next line.
You can go a surprisingly long way without writing much layout CSS, since the defaults are pretty good.
Flexible box layout (usually called "flexbox") is an alternate layout context you can set using the display: flex rule.
This allows a parent element to control how its children are laid out. By default it puts elements all on a single line (as if they were inline elements). Unlike inline elements they won't wrap when there's not enough room; you have to enable wrapping with the flex-wrap: wrap rule.
Flexbox is usually used for single-direction layouts. I.e. a row or a column, but not both. It's also better for flexible layouts where you don't need exact control over where every element goes.
Grid layout is another layout context that lets a parent element specify rows and columns for its children to slot into. You set this using display: grid.
Grid can be used to create very specific layouts using grid-template-columns and grid-template-rows to specify an exact layout grid. You can then place child elements into specific locations on the grid with grid-column and grid-row.
Grid is usually used for two-direction layouts. I.e. rows and columns. It works best when you have a specific grid in mind, but can be less flexible.
It's important for content not to get too wide. Otherwise text gets pretty hard to read as your eyes have to travel so far left-to-right.
So it's a common requirement to put content in a narrow horizontally centered column. For example the content on this very website is in a center column.
The best way to constrain width is with the max-width property. This is better than just width, as it allows content to shrink if the viewport is too small. E.g. if you set width: 60rem but the viewport was only 40rem wide, the element would overflow by 20rem.
```css
.example-center { max-width: 30rem; }
```

(Example: constraining max-width)
We can then use margin to control where the constrained column goes. Setting margin to auto tells the browser to use as much of the leftover available space as possible. E.g. if we set margin-left: auto it would push the element all the way to the right (since the left margin would take up all of the available space):
(Example: left auto-margin)
To center an element we can balance this out with an equal margin-right: auto. Now both margins will get half the available space, pushing the element to the middle.
(Example: left and right auto-margins)
We're going to need control over how wide the Center allows content to get, otherwise it's not very re-usable. We can control this in a couple of ways.
First we could use a CSS variable for the max-width:
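A sketch; the class name and fallback width match the examples elsewhere in this section but are assumptions:

```css
.center {
  max-width: var(--max-width, 30rem);
  margin-left: auto;
  margin-right: auto;
}
```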
This will default to 30rem if no variable is set, but we can override it if needed:
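For example (the modifier name is an assumption):

```css
.narrow {
  --max-width: 20rem;
}
```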
(Example: narrower Center)
This is very easy to use, but has a couple of disadvantages. First it allows any value to be used. This is flexible but will lead to inconsistency in our design. It's better to pick pre-determined "allowed widths" so your layout doesn't look random.
Second, CSS variables are inherited, which means nested Centers will use the --max-width from their parent.
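The situation looks something like this (markup is hypothetical):

```html
<div class="center" style="--max-width: 60rem">
  <div class="center">I inherit my parent's --max-width</div>
</div>
```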
You might expect the second .center here to be 30rem wide, since that's the default. However it will inherit the --max-width: 60rem from its parent, which is unexpected.
Instead of CSS variables we can define "modifier" classes that we apply to override the max-width rule:
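A sketch; the modifier names and widths are assumptions:

```css
.center {
  max-width: 30rem;
  margin-left: auto;
  margin-right: auto;
}
.center-lg { max-width: 60rem; }
.center-sm { max-width: 20rem; }
```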
Now we can add extra classes when we want different widths.
You're going to fix the layout of this page. Currently all the content is full-width and it's hard to read. Download the starter files using the command at the start of the workshop, then open challenge-1/index.html in your editor.
Challenge 1 preview
The header content should be constrained to 60rem wide, the first section to 40rem wide, and the contact section to 20rem wide.
Add the Center CSS you need to the style tag at the top. Then add classes to the HTML, but don't change it in any other way. Here is the result you're aiming for:
Challenge 1 solution
The most important layout primitive is one to control the space between elements. For re-usability and simplicity it's a good idea not to apply spacing rules to individual elements. E.g. if you put margin-left on a button you can only re-use it in places where left spacing makes sense.
It's better to use a parent element to apply spacing to its children. This is often called a "stack". There are lots of ways to implement this (e.g. using flexbox or grid), but for simplicity we're going to do it with margin.
Let's say we want 1rem of space between each of these boxes:
(Example: boxes with no space between them)
We could add styles to our .box class, but then we couldn't re-use those boxes in other places where margin-top didn't work. Instead we can use the parent to add margin to its children:

```css
.example-stack > * { margin-top: 1rem; }
```

(Example: boxes with space above all of them)
This isn't quite right: we've got space above every child—we only want the space between the children. This means no space above the first child.
There are a few ways to achieve this. We could add a rule disabling the margin for the first child:
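For example (the "stack" class name is an assumption):

```css
.stack > * { margin-top: 1rem; }
.stack > *:first-child { margin-top: 0; }
```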
Or we could only apply the rule to elements that are not the first child:
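Which might look like this:

```css
.stack > *:not(:first-child) { margin-top: 1rem; }
```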
Or we could use the adjacent sibling combinator to only apply the rule to elements that have a sibling before them:
```css
.example-stack-owl > * + * { margin-top: 1rem; }
```

(Example: boxes with space between them)
Our stack primitive is useful, but we're going to need different amounts of spacing to make a whole page. We can control this using multiple classes, just like with the Center.
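For example, size modifiers (names and values are assumptions):

```css
.stack-sm > * + * { margin-top: 0.5rem; }
.stack-md > * + * { margin-top: 1rem; }
.stack-xl > * + * { margin-top: 4rem; }
```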
This is nice because it allows us to choose a pre-set number of sizes, which keeps our layout consistent.
Now we can control the space more easily:
```css
.example-stack-md > * + * { margin-top: 1rem; }
.example-stack-xl > * + * { margin-top: 4rem; }
```

(Example: nested stacks with differing space between them)
You're going to use the Stack to fix the layout of a web page. Open challenge-2/index.html in your editor.
Challenge 2 preview
Currently there's no space between anything. There should be 2rem of space between each section, 1rem of space between the elements within each section, and 0.5rem between each form field and its label.
Fix the layout by defining Stack CSS inside the style tag, then only adding Stack classes to the HTML. Don't add or remove any elements or write any other CSS! You can create this whole layout using only Stacks.
Challenge 2 solution preview
Another very useful layout primitive is a "row". Web interfaces often need elements placed next to each other. For example a horizontal list of links in a navigation bar, or the "Confirm" and "Cancel" buttons in a dialogue popup.
(Example: an "Are you sure you'd like to delete everything?" dialogue with Delete and Cancel buttons sitting next to each other in a row)
Flexbox is designed for one-dimensional layouts, so it is perfect here. Setting display: flex on an element lets it control how its children are laid out. By default they will all be put in a single row.
```css
.example-row {
  display: flex;
  resize: horizontal;
  overflow: hidden;
  border: 0.5rem solid;
}
```

(Example: flex container—the boxes are as big as their content)
If you resize the container using the handle on the bottom right you'll see that this layout doesn't adapt. By default the flex children will shrink as much as their content allows, but they can't get smaller than the longest word inside them. Once the container gets narrower than this they stop shrinking and get cut off.
This means our layout isn't flexible enough to cope with different screen sizes. Generally when you put things in a row you want to make sure they can wrap when there's no more space.
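The wrap rule applied to our row (class name assumed):

```css
.example-row {
  display: flex;
  flex-wrap: wrap;
}
```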
(Example: flex wrap)
Resizing this example shows the right-most child wrapping onto a new line when there isn't enough space for it.
Note that we don't need to add media queries here. Those are great when you need really specific control over exactly how and when the layout should change. But this layout is intrinsically responsive. It flows to fit whatever container it is inside based on its content. This tends to be simpler and more robust than trying to figure out exactly what breakpoints to add in media queries.
Layouts usually require some space between each element. CSS has a handy property for controlling this for flexbox and grid containers: gap. This is a shorthand for row-gap and column-gap, which allow you to control the vertical/horizontal spacing separately.
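For example (class name assumed):

```css
.example-row {
  display: flex;
  flex-wrap: wrap;
  gap: 1rem; /* space between children, horizontally and vertically */
}
```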
(Example: flex gap)
Note that the gap is maintained even when the children wrap. Using gap for flexbox is quite new, but it is supported by all modern browsers. If you need to support older browsers you can approximate the same effect using margins, but it's more complex to make sure it handles wrapping.
Flexbox allows control over how children are aligned both horizontally and vertically. Most of the time you want things vertically centered, so that different height children line up. You can control vertical alignment with align-items:
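For example:

```css
.example-row {
  display: flex;
  align-items: center; /* children line up on their vertical centers */
}
```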
(Example: flex vertical alignment)
You can allow this value to be customised the same way we did for the Stack's space above. Either a CSS variable or modifier classes:
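A sketch using modifier classes (names are assumptions):

```css
.row { display: flex; align-items: center; }
.row-align-start { align-items: flex-start; }
.row-align-end { align-items: flex-end; }
```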
You may also need to allow control of horizontal alignment using the justify-content property. This lets the container push its children apart, or to either end of the container.
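For example, pushing children to opposite ends of the row:

```css
.example-row {
  display: flex;
  justify-content: space-between;
}
```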
(Example: flex horizontal alignment)
Open challenge-3/index.html in your editor. You should see a page with a header containing a logo and a nav.
Challenge 3 preview
You need to make the header layout work correctly. The logo should be on the far left, with the nav on the far right, and all the links in a row, like this:
Challenge 3 solution
Again, only add Row CSS to the style tag and classes to the HTML. Don't add any new HTML elements.
Sometimes you need to create a grid of elements, like an image gallery. Every element should be the same size, and the grid should automatically put as many elements as it can in a row.
CSS grid is perfect for this. It lets us create a two-dimensional layout (with columns and rows), and keeps all the elements consistently sized (unlike flexbox).
We can make a grid and set a specific number of columns, using gap to space the columns out. Here we're defining three columns that each take up one fraction (1fr) of the available space, so they'll all be the same size:
```css
.example-grid {
  display: grid;
  grid-template-columns: 1fr 1fr 1fr;
  gap: 1rem;
  resize: horizontal;
  overflow: hidden;
}
```

(Example: three-column grid)
The children automatically get slotted into new rows, but there are always three columns. This isn't very responsive: if you resize the example you'll see the boxes get squished.
The solution is a fancy CSS trick that tells the grid to automatically create as many columns as it can fit:
This rule will create as many equal-sized columns as it can, as long as they don't get smaller than 10rem. As the viewport gets bigger it'll add columns; as the viewport gets smaller it'll remove them.
```css
.example-grid-dynamic {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(10rem, 1fr));
  gap: 1rem;
  resize: horizontal;
  overflow: hidden;
}
```

(Example: dynamic grid)
Resize the example and you should see the grid automatically reflow to fit the available space.
For the final challenge you'll be recreating the Instagram Web profile layout—without writing any CSS at all.
Here's how it currently looks:
Challenge 4 preview
And here's what you're aiming for:
Challenge 4 solution
Open challenge-4/index.html in your editor. You need to get as close to the final layout as you can by only adding classes to the HTML. No touching the CSS!
Practice using various methods to update the DOM.
It's important to get comfortable manipulating the Document Object Model (DOM) using JavaScript. This includes creating new elements, updating content, toggling classnames and removing elements.
Here is a quick overview of various DOM manipulation techniques. If you want to find out more about each one you can check their MDN articles.
You can access elements on the page with the document.querySelector method. This takes any valid CSS selector (like "button" or "#my-id > .my-class:first-child") and searches the DOM for the first match. It returns a DOM element represented as a JS object.
You can access multiple elements with the document.querySelectorAll method. This works in the same way except it returns a NodeList of all matches.
A NodeList is similar to an array but missing most of the usual array methods (it only has .forEach). If you need to use .map/.filter etc. you can turn it into an array with Array.from(myList).
You can create a new DOM element with document.createElement. This takes a tag string like "button" and returns the new DOM object.
It's important to note that this object isn't actually on the page yet—it just lives in memory in your JavaScript. To get the element to show up you have to put it inside another element on the page.
You can do this using the parent.appendChild or parent.append methods. The main difference between these is that append works for text and can take multiple items. E.g. myDiv.append(myButton, "some text", myParagraph).
Most element attributes are reflected as JavaScript properties on the corresponding DOM object. For example the id attribute can be changed on a DOM object using dot-notation:
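For example (the selector is hypothetical):

```js
const el = document.querySelector("#my-id");
el.id = "a-new-id"; // updates the id attribute in the DOM
```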
Some attributes are not accessible as object properties. This notably includes ARIA attributes (like aria-label). To change these you must use myEl.setAttribute and myEl.removeAttribute.
This works fine for simple stuff, but for attributes that are lists of strings (like className) it can be awkward. You often want multiple classnames set on an element, but this requires you to manually concatenate strings together.
There is a nicer way to manipulate lists like this: the DOMTokenList methods. E.g.
myElement.classList.add("my-class")
myElement.classList.remove("my-class")
myElement.classList.toggle("my-class")
myElement.classList.contains("my-class")
You can change the text inside an element by setting the textContent property. Be careful though—this will override all existing content, including other DOM elements inside.
You can also use the .append method to add text inside an element. This will work even if there are already other elements inside.
You can directly add inline styles to an element by setting properties on the myEl.style object. This can get awkward for setting lots of styles, so a simpler way is to add a classname using JS and write the corresponding styles in the CSS instead.
Download the starter files and open challenge/dom.js. Your task is to complete as many of these functions as possible. Each should have a comment explaining what it should do.
You can check if each one is working by opening challenge/index.html in your browser. There's a section for each part of the challenge. Don't forget to check the console if something isn't working!
Practice creating your own promises
You may have used promises provided by libraries or built-in functions before. For example:
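For example, the browser's fetch function returns a promise (the URL here is hypothetical):

```js
fetch("/api/dogs").then((response) => {
  console.log(response.status);
});
```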
A promise is an object with a .then method. This method takes a callback function that it will call with the result when the promise is finished (or "resolved"). You can imagine a promise object looks something like this:
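A rough mental model, not the real implementation:

```javascript
const promise = {
  then(callback) {
    // calls callback(result) when the async work eventually finishes
  },
};
```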
But how do you create your own promise objects?
You can create your own promise objects with new Promise(). You have to pass in a function that defines when the promise will resolve or reject. This function is passed two arguments: resolve and reject. These are functions you call with the value you want to resolve/reject with.
For example:
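A sketch (the function name and delay are our choices):

```javascript
function delayedGreeting() {
  return new Promise((resolve, reject) => {
    // resolve with a value after 100ms
    setTimeout(() => resolve("hello"), 100);
  });
}
```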
You could use the above just like any other promise-returning function:
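For example (re-defining the same hypothetical function so the snippet is self-contained):

```javascript
function delayedGreeting() {
  return new Promise((resolve) => setTimeout(() => resolve("hello"), 100));
}

delayedGreeting().then((greeting) => {
  console.log(greeting); // logs "hello" after roughly 100ms
});
```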
You're going to create a promisified version of setTimeout, called wait. It should take a number of milliseconds to wait as an argument, set a timeout for that long, then resolve the promise.
It should be usable like this:
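For example (usage only—implementing wait is the challenge):

```js
wait(1000).then(() => console.log("one second later"));
```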
You can run the tests to check if your solution works:
You're going to create your own promisified wrapper of Node's fs.readFile method. It usually takes a callback to be run when it finishes its asynchronous task.
Implement the readFilePromise function so that it returns a new promise. It should use fs.readFile to read whatever file path is passed in, then resolve with the result. It should reject with any error that occurred. For example:
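Usage might look like this (the file path is hypothetical):

```js
readFilePromise("./message.txt")
  .then((contents) => console.log(contents))
  .catch((error) => console.error(error));
```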
You can run the tests to check if your solution works:
Practice rendering DOM elements using three different techniques.
Download starter files
Run npx servor workshop to start a dev server
We'll be using three different methods to render the same dynamic UI to compare them. The UI will include a static single element (the title), plus a list of dynamic elements rendered from an array.
There is an array of dog objects in workshop/dogs.js. In each challenge you'll need to import that data and render the following UI:
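Something like this (the exact markup and dog names are assumptions):

```html
<h1>All dogs</h1>
<ul>
  <li>Pongo</li>
  <li>Perdita</li>
</ul>
```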
with a list item for each dog in the array.
The HTML document contains a single container to render all the UI into: <div id="app"></div>.
document.createElement
The standard way to create new DOM elements in JavaScript is the document.createElement method. You pass in a string for the HTML element you want to create and it returns the new DOM node. You can then manipulate the node to add attributes and content.
Once you've created DOM nodes you have to append them to their parent (and eventually to a node that is actually on the page). The classic way to do this is element.appendChild(newNode). This puts newNode inside the element node. If element is already on the page then newNode is rendered.
This has a big drawback: you can only append one thing at a time. This can lead to inefficient rendering. Each time you append a new element to the page the browser has to re-render everything. It's better to get all your DOM nodes ready then append them to the page in one go.
There's a newer method with a nicer API: element.append. This is supported by all browsers but IE11. It can take as many elements to append as you like, and it even supports strings to set text content.
This is powerful when combined with the spread operator, as it means you can append an array of elements in one go:
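A sketch, assuming each dog object has a name property:

```js
const listItems = dogs.map((dog) => {
  const li = document.createElement("li");
  li.append(dog.name);
  return li;
});

const list = document.createElement("ul");
list.append(...listItems); // append the whole array in one go
```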
Open app.js and import the dogs array
Use document.createElement and append to render:
a page title
an unordered list
a list item for every dog in the array
Put all these elements inside the <div id="app"> in the HTML
createElement
We can write our own function to make it simpler to create DOM elements. Ideally we'll be able to pass in a tag name, some properties and some children, and have all the document.createElement stuff handled automatically. E.g.:
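Desired usage might look like this (the exact names are assumptions):

```js
const title = createEl("h1", { className: "title" }, "All dogs");
```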
We'll create this in a new file create-element.js, so we can re-use it in multiple places if we need to.
The ... is the rest operator—it gathers any additional arguments into an array. Any arguments after the props object will go into a single array named children.
First we need to create a new element using the tag argument. Then we need to append all the properties from the props object onto the DOM element. Finally we need to append all the children to the new element. We already saw how append combines with the spread operator to add a whole array of children at once:
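Putting those steps together, one possible implementation:

```js
// create-element.js
export default function createEl(tag, props = {}, ...children) {
  const element = document.createElement(tag);
  Object.assign(element, props); // copy each property onto the element
  element.append(...children); // append all children (elements or strings)
  return element;
}
```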
Don't forget to return the new element! We now have a nice helper function that we can export to use in our other file.
Use your new createEl function to refactor your previous solution. Does it simplify the code?
innerHTML
This method almost feels like cheating. If you set an element's innerHTML property to a string the browser will render it. This makes it a quick way to render a chunk of DOM, especially combined with template literals:
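A sketch—the app container and dogs array come from earlier steps:

```js
app.innerHTML = `
  <h1>All dogs</h1>
  <ul>
    ${dogs.map((dog) => `<li>${dog.name}</li>`).join("")}
  </ul>
`;
```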
There are a couple of downsides to this method. First innerHTML is considered a security risk. If you ever insert user input into an HTML string (like above) you run the risk of XSS attacks (cross-site scripting). A user could insert <script src="steal-credit-cards.js"></script> as the name variable, and your code would render that to the page, causing it to immediately execute.
It can also potentially be slow, since every time you change a node's innerHTML property the browser must completely scrap and recreate its entire DOM tree. If you (for example) keep appending to innerHTML in a loop you'll cause a lot of unnecessary re-renders. Nowadays browsers are so fast this is less of a concern.
Use innerHTML and template literals to create the same UI as before.
The <template> element
The template element is a special HTML element designed for rendering dynamic UI with JavaScript. The template (and all its contents) doesn't appear on the page. It's like a reusable stamp: you have to use JS to make a copy of the template, fill in the blanks, then append the copy to the page.
This is useful because we don't have to dynamically create elements: we can use the ones already created inside the template.
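A sketch of how this works (the template id and markup are assumptions). In the HTML:

```html
<template id="dog-template">
  <li class="dog-name"></li>
</template>
```

Then in the JavaScript:

```js
const template = document.querySelector("#dog-template");
const clone = template.content.cloneNode(true); // copy the template's contents
clone.querySelector(".dog-name").append("Pongo"); // fill in the blanks
list.append(clone); // append the copy to the page
```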
Use the template element to create the same UI. You'll need to edit the HTML file too.
It's a little annoying that templates have to be defined in the HTML file. We're doing all our rendering within JavaScript, so it would be nice to keep all the templates there too.
We can work around this by combining all three of our rendering methods. We can create a new template element within our JS, set its content using innerHTML, then clone that template whenever we need a copy. The template is never actually on the page, it just lives inside our JS.
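A sketch (markup is an assumption):

```js
// create a template that only lives in JS
const template = document.createElement("template");
template.innerHTML = `<li class="dog-name"></li>`;

// later, clone it whenever we need a copy
const clone = template.content.cloneNode(true);
```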
This also avoids the problems with innerHTML, since we won't be passing user input into it. Our only use of innerHTML will be the initial static markup.
Remove your template elements from the HTML file and instead create them with JavaScript. Refactor your previous solution to use this technique.
All of these techniques are valid, and all have their place. It's good to understand the platform you're working with, even if you end up using a framework like React that handles lower-level DOM manipulation for you.
Learn how to use cookies to persist information across HTTP requests
Cookies are an important part of authentication. We're going to learn how they allow your server to "remember" information about previous requests.
HTTP is a "stateless" protocol. This means each new request to your server is totally independent of any other. There is no way by default for a request to contain information from previous requests. Unfortunately it's quite hard to build a website without being able to remember things. For example "what has this user added to their shopping cart?" and "has this user already logged in?".
Cookies were introduced in 1994 as a way for web browsers to store information on behalf of the server. The response to one request can contain a cookie; the browser will store this cookie and automatically include it on all future requests to the same domain.
A cookie is just a standard HTTP header. A cookie can be set by a server including the set-cookie header in a response. Here's an example HTTP response:
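The header values here are illustrative:

```
HTTP/1.1 200 OK
content-type: text/html
set-cookie: userid=1234
```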
That set-cookie header tells the browser to store a cookie with a name of "userid" and a value of "1234".
This cookie is then sent on all future requests to this domain via the cookie request header. Here's an example HTTP request:
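Again, illustrative values:

```
GET /profile HTTP/1.1
host: example.com
cookie: userid=1234
```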
The server would receive this second request, read the cookie header and know that this request was made by the same user as before (with a "userid" of "1234").
Cookies also support extra attributes to customise their behaviour. These can be set after the cookie value itself, like this:
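For example, a cookie that lasts a year and sets the security options described below:

```
set-cookie: userid=1234; Max-Age=31536000; HttpOnly; Secure; SameSite=Lax
```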
By default a cookie only lasts as long as the user is browsing. As soon as they close their tabs the cookie will be deleted by the browser. This is useful for certain features (like a shopping cart), but less useful for keeping a user logged in.
The server can specify an expiry time for the cookie. This tells the browser to keep it around (even if the user closes their tabs) until the time runs out. There are two ways to control this: Expires and Max-Age. Expires lets you set a specific date it should expire on; Max-Age lets you specify how many seconds from now the cookie should last.
Cookies often contain sensitive information. There are a few options that should be specified to make them more secure.
The HttpOnly option stops client-side JavaScript from accessing cookies. This can prevent malicious JS code (e.g. from a browser extension) from reading your cookies (this is known as "Cross-site Scripting" or XSS).
The SameSite option stops the cookie from being sent on requests made from other domains. You probably want to set it to "Lax" (which is the default starting with Chrome v84). Otherwise there's a risk of other sites pretending to act on behalf of a logged-in user (this is known as "Cross-site Request Forgery" or CSRF).
The Secure option will ensure the cookie is only set for secure encrypted (https) connections. You shouldn't use this in development (since your localhost server doesn't use https) but it's a very good idea in production.
Let's see how to set and read cookies using Node.
Download the starter files and cd in
npm install
npm run dev
First let's set a cookie by adding a "set-cookie" header to a response manually. Add a new handler for the GET /example route:
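A sketch, assuming an Express app from the starter files:

```js
app.get("/example", (request, response) => {
  response.set("set-cookie", "hello=this is my cookie");
  response.redirect("/");
});
```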
Visit http://localhost:3000/example. You should be redirected back to the homepage. Open dev tools and look at the "Application" tab. Click on "Cookies" in the sidebar and you should be able to see the cookie you just set.
You can read the cookie on the server by looking at the "cookie" header. Edit your home handler:
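A sketch (the response body is a placeholder):

```js
app.get("/", (request, response) => {
  const cookie = request.get("cookie"); // the raw cookie header string
  console.log(cookie);
  response.send("<h1>Home</h1>");
});
```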
If you refresh the page now you should see "hello=this is my cookie" logged. If you delete the cookie using the Application tab of dev tools and refresh again the cookie log should be gone.
Working with cookies this way is quite awkward—everything is just a big string, and we'd have to manually parse any values we needed. Luckily Express comes with some built-in cookie methods.
Express' response object has a cookie method. It takes three arguments: the name, the value, and an optional object for all the cookie options. It handles creating the "set-cookie" header string automatically.
Update your /example
handler to use Express' cookie helper:
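A sketch using `response.cookie` (note that Express takes `maxAge` in milliseconds):

```js
app.get("/example", (request, response) => {
  response.cookie("hello", "this is my cookie", {
    httpOnly: true,
    sameSite: "lax",
    maxAge: 9000, // milliseconds
  });
  response.redirect("/");
});
```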
This should create the exact same cookie as before.
Since reading cookies isn't something every server needs, Express doesn't include it by default. Instead there's an optional middleware you need to install and use.
This middleware works like the built-in body-parsing one. It grabs the "cookie" header, parses it into a nice object, then attaches it to the request
for you to use.
So now you can access all the cookies sent on this request at request.cookies
:
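A sketch of registering the middleware and reading the parsed cookies (assuming `cookie-parser` has been installed with npm):

```js
const cookieParser = require("cookie-parser");
app.use(cookieParser());

app.get("/", (request, response) => {
  // request.cookies is an object with a key for each cookie name
  console.log(request.cookies);
  response.send("<h1>Home</h1>");
});
```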
You should see an object logged, with a key/value pair for each cookie the browser sent.
Express also provides the response.clearCookie
method for removing cookies. It takes the name of the cookie to remove. When the browser receives this response it will delete the matching cookie. Add a new route to your server:
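A sketch of the removal route:

```js
app.get("/remove", (request, response) => {
  // Sends a Set-Cookie header that tells the browser to delete "hello"
  response.clearCookie("hello");
  response.redirect("/");
});
```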
If you visit http://localhost:3000/remove in your browser you should be redirected back to the home page, but the cookie will be gone.
Cookies are useful for ensuring users don't have to keep verifying their identity on every request. Once a user has proved who they are (usually by entering a password only they know) it's important to remember that information.
There are two ways to use cookies for authenticating users. The first is often known as "stateless" auth. We can store all the information we need to know in the cookie itself. For example:
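A sketch of what that might look like (the user object here is hypothetical):

```js
// Store the whole user object directly in the cookie
response.cookie("user", JSON.stringify({ id: 1, name: "oli" }), {
  httpOnly: true,
  sameSite: "lax",
});
```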
When the user first logs in we set a cookie containing the user's information. On subsequent requests our server can check for this cookie. If it is present we can assume the user has previously logged in, and so we allow them to see protected content.
Unfortunately this has a pretty serious security problem: we can't trust cookies sent to us. Since a cookie is just an HTTP header anybody could send a cookie that looks like anything. E.g. anyone can use curl
to send such a request:
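For example, something like this (assuming the server from this workshop is running on port 3000 and uses a `user` cookie):

```sh
curl http://localhost:3000/ --header 'cookie: user={"id": 1, "name": "admin"}'
```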
It's also easy to edit cookie values in dev tools—a user could simply change their ID/username to another.
However there is a way we can trust cookies: we can sign them. In cryptography signing refers to using a mathematical operation based on a secret string to transform a value. This signature will always be the same assuming the same secret and the same input value. Without the secret it is impossible to reproduce the same signature.
If we sign our cookie we can validate that it has not been tampered with, since only our server knows the secret required to create a valid signature. Implementing this from scratch would be complex and easy to mess up—luckily the cookie-parser
middleware supports signed cookies.
You need to pass a random secret string to the cookieParser()
middleware function. Then you can specify signed: true
in the cookie options. Signed cookies are available at a different request key to normal cookies: request.signedCookies
.
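A sketch of how these pieces fit together (the secret and cookie values here are placeholders):

```js
const cookieParser = require("cookie-parser");
// The secret is used to sign cookies; keep it out of source control
app.use(cookieParser("badly-kept-secret"));

app.get("/login", (request, response) => {
  response.cookie("user", "oli", { signed: true, httpOnly: true });
  response.redirect("/");
});

app.get("/", (request, response) => {
  // Signed cookies live on a separate key from normal ones
  console.log(request.signedCookies);
  response.send("<h1>Home</h1>");
});
```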
Add a GET /login
route that sets a signed cookie containing some user information, then redirects to the home page
Add a GET /logout
route that removes the cookie, then redirects to the home page
Log the signed cookies in the home route
If you visit /login
you should see the cookie data you set logged in your terminal. In the browser's dev tools the cookie will have some extra random stuff attached to it. This is the signature.
If you edit the cookie in dev tools and then refresh you should instead see your server log false
for that cookie name. This is because the signature no longer matches, so the server does not trust the cookie.
If you visit /logout
the cookie should be removed from your browser.
Storing all the information we need inside the cookie like this is very convenient. However there are some downsides:
Cookies have a 4kb size limit, so you can't fit too much info in them.
The cookie is the "source of truth". This means the server cannot invalidate a cookie, it has to wait for the cookie to expire. The server cannot log users out—as long as their cookie is valid they can keep making requests.
The other way to keep users logged in is to keep track of that state on the server. The cookie just stores a unique ID. This ID refers to some data that lives on the server and stores all the user info.
For example once a user logs in you might set a cookie containing a session ID like this:
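Something like this ("abc123" stands in for a randomly generated ID):

```js
response.cookie("sid", "abc123", { signed: true, httpOnly: true });
```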
and then store the relevant user info using that sid
as a key:
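For example:

```js
const sessions = {};
// Keyed by the same ID we put in the cookie
sessions["abc123"] = { id: 1, name: "oli" };
```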
Here we're just putting the session data in an object, which means it will get deleted whenever the server restarts. Ideally this information would be stored in a database so it persists.
On subsequent requests the server would read the session ID cookie, then use that to look up the user info from the sessions
object.
This allows the server to control the "session"—if it needs to log a user out it can simply delete that entry from the sessions
storage.
It's important that the session ID is a long random string, so that nobody can guess them. Here's a good way to generate a random 18 byte long string in Node:
Also although we aren't directly storing user info in the cookie we still need to sign it. Otherwise if someone did find a way to guess the session IDs they could edit their cookie.
Change your GET /login
route to set a signed session ID cookie. This cookie should just contain a long random string
Store the user data in a global sessions object
Change the home handler to read the session ID cookie, look the user info up in the global sessions object, then log it
Change the GET /logout
route to remove the cookie and delete the session from the global sessions object
Learn how to use npm modules like linters to make writing code easier
There are lots of useful modules on npm that can help us when we're working. Writing code can be difficult and error-prone, when you're learning and even when you're experienced. There are certain parts that are worth automating so you can free up your brain to worry about more interesting problems.
The JS ecosystem has mostly settled on using the Prettier library to format their code. This ensures that everyone's code looks the same, which makes it easy to jump into new projects. It's nice to automate things like indentation or quote usage, since these are not really things you want to waste brainpower on while you are coding.
There are a couple of ways to use Prettier to format your code. The simplest is to install an extension for your editor. Here is the VS Code extension. You can then configure your editor to "format on save", so your code is always formatted correctly.
You should also install Prettier as a dependency in your project. This ensures anyone who contributes will get the same version. Otherwise someone with an older version of the editor extension might end up formatting some code differently to the rest.
You want to install it as a development dependency, since it is not needed for your actual app code to work:
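```sh
npm install --save-dev prettier
```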
The Prettier extension will automatically use the locally installed version, so everyone should end up with consistent code.
If you like you can use the Prettier CLI to format code as well. This command will format all files in your current directory:
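```sh
npx prettier --write .
```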
Prettier is explicitly designed to be very opinionated—the whole point is to make all code consistently formatted. However there are some things you can configure (like using single vs double quotes).
Create a .prettierrc.json
file and put any config options in here. If you're happy with the default settings you can just use an empty object:
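```json
{}
```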
This will ensure that everyone's editor extension uses the defaults instead of whatever they might have configured for their personal settings.
ESLint is the most popular JS linter. A linter is a program that looks at your code and tries to find mistakes. You can think of it like a spell/grammar checker for code.
This is incredibly useful as a linter can easily find problems that are quite difficult for a human to spot. For example here's a simple bug:
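Something like this deliberately broken snippet (the variable names are made up for illustration):

```js
let fileinput = "my-file.txt";
console.log(fiileinput); // typo! this variable doesn't exist
```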
ESLint would underline console.log(fiileinput)
in red, and tell you that fiileinput
is not defined. Linters can also be helpful when you're learning, as some of their warnings will be about things you don't know yet.
Similar to Prettier you want to install both the ESLint editor extension and the command-line tool. Here is the VS Code extension.
Again you'll want to install it as a development dependency:
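```sh
npm install --save-dev eslint
```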
ESLint requires a bit more config, which you can automatically generate by running this command:
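```sh
npx eslint --init
```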
It will ask you some questions about your project, then generate a config file. Choose the JSON option—you should end up with a new file named .eslintrc.json
that looks like this:
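The exact contents depend on the answers you give, but it should look roughly like this:

```json
{
  "env": {
    "browser": true,
    "es2021": true
  },
  "extends": "eslint:recommended",
  "parserOptions": {
    "ecmaVersion": 12
  },
  "rules": {}
}
```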
This config "extends" the built-in recommended ruleset, which will catch common mistakes and problems. Later on if you need to you can add or disable specific rules in the "rules"
object here.
You may need to restart your editor after installing the ESLint extension. Afterwards you should start seeing red underlines for mistakes in your code. Try referencing an undefined variable like this:
You should see a red underline—hover the variable and you'll see a popup with a message like 'xyz' is not defined. eslint(no-undef)
. The last bit is the specific rule being broken. If you ever don't understand a problem you can google this rule and read about it on ESLint's website for more information.
It's useful to have a local dev server when working on projects. Whilst it's possible to just open an .html
file in your browser to view a webpage locally, lots of newer browser features won't work (for security reasons). A proper dev server also has nice features like "live reload" to auto-reload the page whenever you save changes to a file.
The "Live Server" VS Code extension is popular for this, as it makes it quick and easy to start a local server. However this isn't ideal for shared projects as you don't have a centrally configured way to run the site—you're relying on each contributor to bring their own server.
Browsersync is a nice tool for creating dev servers. You should install it as a development dependency:
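```sh
npm install --save-dev browser-sync
```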
You can then start a server for your local files:
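For example (the `--files` glob controls which files are watched, so adjust it to your project):

```sh
npx browser-sync start --server --files "."
```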
This is annoying to type so you probably want to add an npm script to your package.json
:
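Something like this:

```json
{
  "scripts": {
    "dev": "browser-sync start --server --files \".\""
  }
}
```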
Now you can just run npm run dev
to start the server.
Browsersync will watch all your files and auto-reload your browser tabs if you change them. It also synchronises browser state across all tabs/windows, so if you scroll or fill in a form in one tab it'll update all of them. This is handy for testing lots of viewport widths at once, for example.
It will also run the server on your local Wi-Fi network, so you can easily test your local work on a mobile device. When you start the server you'll see a log like: External: https://192....
. Visit this IP address on any device on the same Wi-Fi network to see your local site.
In JavaScript functions are treated like any other variable. This is sometimes referred to as “first-class functions”. The concept can be confusing, so let's look at some examples.
When you create a function in JS you are creating a normal variable:
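```js
// This declaration creates a variable named `returnsOne`
function returnsOne() {
  return 1;
}
```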
This is still true (and perhaps more obvious) for arrow functions:
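```js
// The arrow function is assigned to a variable like any other value
const returnsOne = () => 1;
```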
You can reference this variable the same way you would any other (using its name):
We can pass functions to other functions as arguments.
Write a function named logger
It should take one argument and log it to the console
Call logger
with the returnsOne
function as an argument
Answer

```js
function logger(thing) {
  console.log(thing);
}

logger(returnsOne); // function returnsOne()
```
The main distinction between a function and other types of variable is that you can call a function. You call a function by putting parens (normal brackets) after it:
Calling a function will run the lines of code inside of it. We can either use the returned value directly or assign it to a named variable.
If the function returns nothing you'll get undefined
:
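```js
function returnsOne() {
  return 1;
}

function returnsNothing() {
  // no return statement
}

const result = returnsOne(); // calling runs the function body
console.log(result); // 1
console.log(returnsNothing()); // undefined
```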
This is often a source of confusion when passing functions as arguments.
Add another call to logger
, but this time pass in returnsOne()
Why do we see a different value logged?
Answer

```js
function logger(thing) {
  console.log(thing);
}

logger(returnsOne); // Logs the function itself: `function returnsOne()`
logger(returnsOne()); // Logs the function's return value: `1`
```
Edit logger
to use typeof
to log the type of the value
Answer

```js
function logger(thing) {
  console.log(typeof thing);
}

logger(returnsOne); // function
logger(returnsOne()); // number
```
Another source of confusion is functions defined inline. This is a common pattern for passing functions as arguments to other functions (for example as event listeners):
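```js
window.addEventListener("click", (event) => {
  console.log(event.clientX, event.clientY);
});
```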
Type the event listener code into your editor
Extract the inline function and assign it to a variable
Use the extracted function as your event listener
Answer

```js
const handleClick = (event) => {
  console.log(event.clientX, event.clientY);
};

window.addEventListener("click", (event) => handleClick(event));
// OR
window.addEventListener("click", handleClick);
// We don't need an extra arrow function if all it does is
// forward arguments on to the function we actually care about
```

It's important to note that we don't want to _call_ our function when we pass it here. This won't work as we need to pass a function, not its return value:

```js
const handleClick = (event) => {
  console.log(event.clientX, event.clientY);
};

window.addEventListener("click", handleClick());
// this is equivalent to:
// window.addEventListener("click", undefined);
// since handleClick doesn't return anything
```
"Callback" is a scary word, but you've actually been using them the whole time. A callback is a function passed to another function as an argument. The name refers to what callbacks are usually used for: "calling you back" with a value when it's ready.
For example the addEventListener
above takes a function that it will call when the "click"
event happens. We're telling the browser "hey, call us back with the event info when that event happens".
Functions are effectively a way to delay execution of a block of code. Without them all our statements would run in order all in one go, and we'd never be able to wait for anything or react to user input.
Write a function one
that takes a callback as an argument
It should call the callback with 1
Call your one
function and pass in a callback that logs its argument
Answer

```js
function one(callback) {
  callback(1);
}

one((x) => console.log(x));
// OR
one(console.log);
// the extra wrapper arrow fn isn't needed, since all it does
// is forward its argument on to console.log (which is already a fn)
```
The callback above might feel a bit pointless: why not just have the one
function return 1
? Callbacks make more sense when dealing with asynchronous code. Sometimes we don't have a value to return straight away.
For example network requests and timeouts can take multiple seconds to complete. JavaScript doesn't wait for these—it keeps on going and executes the next statements in the script.
Our addEventListener
from above can't return the click event, since it hasn't happened yet. So instead we pass a callback that it will run when it has the event.
Write a function asyncDouble
that takes 2 arguments: a number and a callback
It should use setTimeout
to wait one second
Then it should call the callback argument with the number argument multiplied by 2
Call asyncDouble
with 10
and a callback that logs whatever it is passed. You should see 20
logged after 1 second.
Can you see why asyncDouble
can't just return the doubled value?
Answer

```js
function asyncDouble(num, callback) {
  setTimeout(() => callback(num * 2), 1000);
}

asyncDouble(10, (x) => console.log(x));
// OR
asyncDouble(10, console.log);
// (after one second) logs `20`
```
Let's make some traffic lights.
Write a function light
that takes two arguments: a string and a callback
It should wait 1 second, log the string and then call its callback argument
Use light
to log each colour of a traffic light sequence, in order, followed by "finished"
e.g. "green"
, "amber"
, "red"
, "amber"
, "green"
, "finished"
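One possible solution looks like this, nesting each step inside the previous callback:

```js
function light(colour, callback) {
  setTimeout(() => {
    console.log(colour);
    callback();
  }, 1000);
}

light("green", () =>
  light("amber", () =>
    light("red", () =>
      light("amber", () =>
        light("green", () => console.log("finished"))
      )
    )
  )
);
```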
Traffic light patterns are a bit more complex. The sequence should actually be "green"
, "amber"
, "red"
, "red"
and "amber"
(at the same time), "green"
. Without changing light
, create the new sequence.
Learn how to use Cypress to test your app in a real browser
Cypress is a tool for automatically running your tests in a real web browser. Lots of testing libraries run using Node in your terminal. This means they struggle to test real user interactions with the DOM (since Node doesn't have one).
You can test in the browser by manually including test functions in a script tag on the page. However this isn't ideal as you have to remember to remove them from your production deployment (or all your users can see your test results in the console).
Cypress solves this problem by letting you write separate test files, then injecting them into a real browser. Since it automatically controls a browser you can see your tests running in real time.
First we need to create a new directory and initialise it so we can install modules:
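For example (the directory name is up to you):

```sh
mkdir cypress-practice
cd cypress-practice
npm init -y
```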
Once you're inside your new directory and have generated a package.json
file you can install Cypress as a dev dependency:
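```sh
npm install --save-dev cypress
```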
Cypress can take a while to install the first time, so be patient. Once it's finished open your package.json
and edit the "test"
script to this command:
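```json
{
  "scripts": {
    "test": "cypress open"
  }
}
```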
Run npm run test
in your terminal, and you should see the Cypress app start up. The window will show a bunch of example tests—don't worry about these for now.
The first time you run Cypress it automatically creates some files. You should see a cypress.json
file and a cypress/
directory full of examples. The JSON file is used to configure Cypress—you can leave it empty for now.
Now you're ready to write your first test.
By default Cypress looks in the cypress/integration/
directory for test files. It will run anything inside this folder. You can delete the auto-generated example/
folder, since we're going to start from scratch.
Create a new file called practice-tests.js
. You should see this file show up in the Cypress app under "Integration tests".
You can click the file name to run it with Cypress. This should open up a browser with a "test sidebar" on the left. Since we didn't actually write any tests yet it'll say "No tests found".
Let's add a simple test to the file:
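Something like this (Cypress bundles Chai, so `expect` is available globally):

```js
it("works", () => {
  expect(1 + 1).to.equal(2);
});
```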
Cypress uses a global it
function to define tests. This works just like the test
function we wrote in the intro to testing workshop—it just has a different name.
If you save your test file Cypress should automatically re-run the test. It will now find the test and show the result in the sidebar.
This is a fine way to write simple unit tests, but Cypress is much more powerful. Let's test a real web page.
Add a new test function:
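```js
it("can load the Kitchen Sink app", () => {
  cy.visit("https://example.cypress.io");
});
```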
We're using the global cy
object to run a "command". cy.visit
tells Cypress to load a URL in the browser and wait for the page to load.
You should see the test re-run in your Cypress browser. The right side should load Cypress' example app (called "Kitchen Sink").
Let's add a command to find an element on the page:
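For example, finding the "Querying" link by its text:

```js
it("can find an element", () => {
  cy.visit("https://example.cypress.io");
  cy.contains("Querying");
});
```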
cy.contains
will search the text content of all elements until it finds a match. If you click this test in the sidebar you should see the "Test body" table. Hover the contains
command and the browser on the right will update to highlight the element it found.
If a Cypress command fails it will cause the test to fail. For example change your test to search for an element containing "zzzzzz"
. You should see an error message in the sidebar.
Now let's interact with this element in our test:
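```js
it("can click a link", () => {
  cy.visit("https://example.cypress.io");
  cy.contains("Querying").click();
});
```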
This will tell Cypress to click the link. You should see the test re-run, and the browser navigate to the "Querying" page.
Finally we can make an assertion about the new page to verify that we got to the right place:
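The full test might look like this:

```js
it("can navigate to the Querying page", () => {
  cy.visit("https://example.cypress.io");
  cy.contains("Querying").click();
  cy.url().should("include", "/commands/querying");
});
```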
This syntax is a little strange at first, but it's supposed to read like a sentence. The cy.url
method retrieves the current URL. The .should
method creates an assertion. Here we are checking that the URL includes the sub-string "/commands/querying".
That's it—we've created a full integration test with Cypress. It's important to note that we aren't limited to a single set of tasks in a test. We can keep adding more commands in here until we're satisfied.
You can find a full set of available commands at https://example.cypress.io/. Here's a quick example with a few new ones that will be useful:
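A sketch of the idea (the selectors and URL here are assumptions about the app under test):

```js
it("can fill in the sign up form", () => {
  cy.visit("/sign-up");
  cy.get("form").find("input[type='email']").type("test@example.com");
  cy.get("form").submit();
});
```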
Here we're getting the sign up form, finding the email input within it, then typing an email into that input. Finally we get the form again and submit it.
cy.get
takes a CSS selector to find an element, just like querySelector
in the DOM. .find
works the same way, but it searches the children of an element. .type
simulates typing text via the keyboard. .submit
triggers a form submission.
You're going to use Cypress to build a server TDD-style. That means write a failing test for each feature first, then implement that feature to make the test pass.
Create a server.js
file
npm install express
npm install -D nodemon
(for auto-restarting the server)
Add a "dev" npm script of: nodemon server.js
Write your Express server in server.js
. Write your tests in cypress/integration/
.
/
page with a title of "Welcome to my site"
/
page has working links to /about
and /sign-up
/about
page with a title of "About this site"
/sign-up
page with a form containing email/password inputs
/welcome
page with a title of "Thanks for joining"
/sign-up
page redirects to /welcome
after form submission (don't worry about actually using the submitted data)
If you have spare time go back to the server-side forms workshop. Use Cypress to write tests for your solution to verify that it works correctly
This workshop is an introduction to using Git for version control; GitHub for hosting a codebase and deploying a website; and VS Code for writing and editing code, as well as version control.
Git is a program for controlling versions of a project. It allows you to track changes you make to files over time.
To make an analogy, it's like having a save button on your project. At any time, you can save what you've written and mark that point in time. This means you can review versions of the code or go back to a previous version.
GitHub, despite the similar name, is a separate product from Git: it's a platform for storing Git repositories online.
Each project is stored in a repository - this is a folder where all the files are contained on GitHub.
GitHub also allows you to host a live version of your website via GitHub Pages. You can share the link with anyone and they can visit websites that you've created!
We'll describe repositories in terms of local and remote versions.
A local repository is a folder on your computer which you save files to. A remote repository is a folder online which your files can be duplicated to.
When working with Git, you'll need to establish a workflow for synchronising code between a local and remote repository.
For example, if I change files on my computer, I'd like to then update the live version of a website by pushing my files to the remote repository. Or, if I'd like to download a project and create my own version, I might clone a remote repository to my local machine.
Once you've established a connection between your local repository and a remote one, you'll usually make changes on your local machine.
You can then stage the changed files - in other words, tell Git that you'd like these to be tracked in your version history. Staging is used to specify which changes to track, and which not to. For example, you might be ready to add your HTML and CSS files, but have unresolved errors in JavaScript. You can choose to stage some or all of your files.
The next step is to commit your staged files - marking a save point in your project of the progress so far. Commits should contain a commit message which describes the changes made.
Finally, you'll push your local changes to the remote repository.
VS Code is a program you can download which lets you write and edit code on your computer. It's one of the most popular text editors and offers a number of extensions which can be helpful to you as a developer.
The source control tab in VS Code allows you to use Git within your text editor. It offers the ability to stage, commit and push changes between your local machine and a remote repository.
From here, you can continue to stage, commit and push your changes.
You should commit once you have completed a change or feature, not after writing a certain amount of code, and don't wait until a project is complete. Getting into the habit of making small commits often will give you a good level of practice with Git. Regularly pushing your changes will ensure your codebase is backed up and version-controlled. Additionally, you'll have GitHub activity on your profile (green squares).
In this workshop you'll learn how to validate user input in the browser and present error messages accessibly.
Download starter files
Open workshop/index.html
in your browser
This is the form we'll be adding validation to
Client-side validation is important for a good user experience—you can quickly give the user feedback when they need to change a value they've entered. For example if passwords must be a certain length you can tell them immediately, rather than waiting for the form to submit to the server and receive an invalid response.
Our form has two inputs: one for an email address and one for a password. These are the requirements we need to validate:
Both values are present
The email value is a valid email address
The password contains at least one number, and is at least 8 characters long
Before we implement validation we need to make sure the user is aware of the requirements, by labelling the inputs. There's nothing more frustrating than trying to guess what you need to do to be able to submit a form.
If we don't list our password requirements users will have to guess what they are.
The simplest way to list requirements is in a <div>
following the label. This is fine for visual users but won't be linked to the input, which means assistive tech will ignore it.
Add a visual required indicator to both inputs.
Add instructions containing our password requirements
Associate the instructions with the input using aria-describedby
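A sketch of the markup (the IDs and wording here are assumptions; adapt them to the starter files):

```html
<label for="password">Password <span aria-hidden="true">*</span></label>
<input
  type="password"
  id="password"
  name="password"
  aria-describedby="passwordRequirements"
/>
<div id="passwordRequirements">
  Must be at least 8 characters and contain a number
</div>
```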
If you inspect the password input in Chrome's devtools you should be able to see the accessible name (from the label) and description (from the div) in the "Accessibility tab".
Now we need to tell the user when they enter invalid values. Browsers support lots of different types of validation.
The required
attribute will stop the user submitting the form if they haven't entered this value yet.
Browsers will validate certain input type
s to make sure the value looks correct. For example:
We can specify a regex the value must match using the pattern
attribute. For example this input will be invalid if it contains whitespace characters:
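```html
<!-- \S+ means "one or more non-whitespace characters" -->
<input type="text" name="username" pattern="\S+" />
```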
You can even style inputs based on their validity using CSS pseudo-classes like :invalid
, :valid
and :required
.
Ensure each input meets our validation requirements above. If you submit the form with invalid values the browser will automatically stop the submission and show a warning.
It's still useful to start with the HTML5 validation attributes, so that if our JS fails to load or breaks the user at least gets basic validation.
First we need to disable the native validation by setting the novalidate
attribute on the form element. This prevents the built-in errors from appearing.
Then we can listen for the form's submit
event and check whether any inputs are invalid using the form element's .checkValidity()
method.
This method returns true if all inputs are valid, otherwise it returns false. If any of the inputs are invalid we want to call event.preventDefault()
to stop the form from submitting. Don't worry about showing error messages for now.
Open workshop/index.js
Disable the native form validation
Listen for submit events and check whether all the inputs are valid
Prevent the form from submitting if any inputs are invalid
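A sketch of the steps above (assuming a single form on the page):

```js
const form = document.querySelector("form");
// Turn off the browser's built-in error bubbles
form.setAttribute("novalidate", "");

form.addEventListener("submit", (event) => {
  if (!form.checkValidity()) {
    // At least one input is invalid, so stop the submission
    event.preventDefault();
  }
});
```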
We've managed to stop the form submitting invalid values, but we need to provide feedback to the user so they can fix their mistakes.
First we need to actually mark the input as "invalid". The aria-invalid
attribute does this. Each input should have aria-invalid="false"
set at first, since the user hasn't typed anything yet. Then we need to know when the input becomes invalid, so we can update to aria-invalid="true"
.
We can listen for an input's invalid
event to run code when it fails validation. The browser will fire this event for all invalid inputs when you call the form element's checkValidity()
method. E.g.
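```js
input.addEventListener("invalid", () => {
  input.setAttribute("aria-invalid", "true");
});
```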
The final step is showing a validation message depending on what type of validation error occurred. We can access the default browser message via the input.validationMessage
property. E.g. for a required
input this might be "Please fill out this field"
.
Loop through all the inputs
Mark each as valid
For each input listen for the invalid
event
Mark the input as invalid when this event fires
We need to actually tell the user what their mistake was. The simplest way to do this is to grab the built-in validation message from the browser. This will be available as the element.validationMessage
property. For example if the user typed "hello" into this input:
The JS would log something like "Please include an '@' in the email address". These messages vary across browsers.
We need to put the message on the page so the user knows what they did wrong. The message should be associated with the correct input: we want it to be read out by a screen reader when the user focuses the input.
We can achieve this using aria-describedby
just like with our password requirements. This can take multiple IDs for multiple descriptions (the order of the IDs determines the order they will be read out).
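A sketch of the markup, with a second div for the error message (the IDs are assumptions):

```html
<label for="email">Email</label>
<input
  type="email"
  id="email"
  name="email"
  aria-describedby="emailRequirements emailError"
  required
/>
<div id="emailRequirements">Must be a valid email address</div>
<div id="emailError"></div>
```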
Whenever this input is focused a screen reader will read out the label first, then the type of input, then any ARIA descriptions.
Create divs to contain the error messages
Set attributes on the inputs and divs so they are linked together
Put the validation messages inside the divs so the user can read them
Right now it's a little confusing for the user as the input stays marked invalid even when they type something new. We should mark each input as valid and remove the error message when the user inputs something.
Add an event listener for input
events
Mark the input valid and remove the error message
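A sketch of the idea (`errorDiv` stands in for however you reference the input's error message container):

```js
input.addEventListener("input", () => {
  input.setAttribute("aria-invalid", "false");
  errorDiv.textContent = "";
});
```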
We have a functional, accessible solution now, but it could be improved with some styling. It's common to style validation messages with a "danger" colour like red, and sometimes to mark invalid inputs with a different coloured border. You could also use warning icons to make errors even more obvious.
Style the error messages
Style invalid inputs
Add any other styles you like to make it look good
The default browser messages could be better. They don't contain specific, actionable feedback. E.g. if a pattern
doesn't match the user sees "Please match the requested format". It would be more useful to show "Please enter at least one number".
We need to know what type of error occurred to show the right custom message. The input element's .validity
property contains this information.
This interface has properties for every kind of error. For example an empty required
input will show:
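A plain object mirroring the shape of `input.validity` in that situation (only the most relevant properties shown):

```js
const validity = {
  valueMissing: true, // the required value is missing
  typeMismatch: false,
  patternMismatch: false,
  tooShort: false,
  valid: false,
};
console.log(validity.valueMissing); // true
```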
We can write an if
/else
statement to check whether each property we're interested in is true. If it is we can show a custom error on the page:
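A sketch of that idea as a standalone function; the messages here are examples, so tailor them to each input:

```js
// Map a ValidityState-like object to a specific, actionable message.
// The property names match the real ValidityState interface.
function getMessage(validity) {
  if (validity.valueMissing) {
    return "Please enter a value";
  } else if (validity.typeMismatch) {
    return "Please enter a valid email address";
  } else if (validity.patternMismatch) {
    return "Please enter at least one number";
  }
  return "";
}

console.log(getMessage({ valueMissing: true })); // Please enter a value
```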
Edit your invalid
handler to check the validity
interface
Show custom error messages based on the input's ID and what validation failed.
Learn how to use forms to send requests and submit user data.
Forms are the building blocks of interactivity on the web. They allow websites to send requests to servers without requiring any client-side JavaScript.
You create a form with the <form>
element. This is a container for all the different types of inputs your users will interact with.
Forms can contain any number of elements that allow user input (e.g. <input>
). Users can enter values into these fields, then submit the form. The browser will then make a request to a new page that you specify, sending all the data from the form.
The humble <input>
element can be used to render many different types of input.
There are many different types of input to cover all the various kinds of data you might want to collect from a user. You can see the full list and read more about each type on MDN.
<input type="text">
Basic single line text input.
<textarea></textarea>
Allows multiline text input.
<input type="email">
Shows a special keyboard with the @
symbol on some phones. Also validates that the user entered an email on submission.
<input type="checkbox">
Used for turning specific values on or off.
<input type="radio">
Used for selecting one value out of a group of options.
Forms can also contain button elements. By default clicking them will submit the form. It's generally a good idea to explicitly add type="submit"
to your submit buttons (even though it's the default). That way it's obvious to other developers what the button does.
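For example, a minimal form might look like this (the action URL and names are illustrative):

```html
<form action="/submit">
  <label for="name">Your name</label>
  <input id="name" type="text" name="name" />
  <button type="submit">Send</button>
</form>
```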
action
attributeWhen submitted a form will send a request to the URL in its action
attribute. This can be a relative URL within the same site (e.g. /submit
) or an external URL to another site (e.g. https://example.com/submit
).
This request is a standard GET
request, just like when you type a URL manually (or click a link). When the browser receives a response to the request it will render that as a new page (just like when you click a link).
All inputs with a name
attribute within your form will be submitted. By default they'll be sent as the "search" part of the URL (often called the "querystring". It's the bit after the "?"
at the end).
It will be structured like this:
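For example, a form with `name` and `email` fields might produce a URL like this (the values are illustrative):

```
/submit?name=oli&email=oli%40example.com
```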
Each field is represented with its name and value separated by an ampersand (&
).
Some input types submit differently. For example a checkbox can either be checked or not. If it is unchecked it won't be sent at all. If it is checked but has no value
attribute then it will be sent as name=on
. If it has a value
attribute that will be used instead. E.g.
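For instance (the names and values are illustrative):

```html
<input type="checkbox" name="marketing" />
<!-- if checked, submits: marketing=on -->

<input type="checkbox" name="marketing" value="yes" />
<!-- if checked, submits: marketing=yes -->
```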
Radio buttons are designed to select one value out of a set of options. A group of radios should use the same name
to link them together. They should each have unique value
attributes. The value
of the selected radio will be submitted. E.g.
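Something like this (names/values illustrative):

```html
<input type="radio" name="contact" value="email" />
<input type="radio" name="contact" value="phone" />
<!-- if the first radio is selected, submits: contact=email -->
```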
Open workshop/index.html
in your editor
Add a form to the page containing a text input
This should submit a name
value
Don't forget inputs need labels!
The form should submit to "https://learn-forms.netlify.com/submit/part1"
The response will tell you whether you successfully submitted a name
Change your form to submit to "https://learn-forms.netlify.com/submit/part2"
Add fields for:
an email address
a telephone number
a textarea for a message
a marketing-consent checkbox
The data submitted should look something like this:
Change your form to submit to "https://learn-forms.netlify.com/submit/part3"
Add a group of three radios that allow the user to choose their preferred contact method (email, phone, post)
The extra data submitted should look like this (if email was selected):
Make a new folder on your computer
Open this in VS Code
Navigate to the source control tab and click 'Initialise repository'
Make your changes - here we've created files for HTML, CSS and JavaScript and linked them all together. Back on the source control tab, you can view which files have unstaged changes.
Click the add button to stage your files; this will move them to 'Staged Changes'
Write a meaningful commit message, which summarises the work you've completed. Then, click the commit button (represented with a tick) to commit those files to your Git history.
VS Code will prompt you to publish the branch once you've committed your changes. The first time you click this, you'll need to connect to your GitHub account.
Finally, publish to a new public repository
And that's all! You'll now have published the files to GitHub. If you visit your profile on GitHub you should now see your repository. You'll need to make your repositories public so that we can view them when you make your application. This does mean they'll be available to anyone who uses the web - so be careful what personal information you share.
Each of your projects should be live on the web. GitHub offers a free and easy way to get your site deployed online. Have a read through their GitHub Pages documentation. Please note, GitHub mentions using Jekyll to create default themes for projects; we ask that you do not use any custom themes, and configure all your styling using CSS.
Users generally expect required fields to be marked with an asterisk (*
). We can add one inside the <label>
. However this will cause screen readers to read out the label as "email star", which is not correct. We should wrap the asterisk in an element with aria-hidden="true"
to ensure it is ignored by assistive technology.
We need to use the aria-describedby attribute on the input. This takes the IDs of other elements that provide additional info. It allows us to link the div to the input so screen readers read out the extra info as if it were part of the label.
Built-in validation is very simple to implement, and it works without JavaScript. However it has a few downsides. We cannot style the error message bubbles that pop up. The messages are generic and often not very helpful. Required inputs are marked invalid as soon as the page loads (since they are empty). We can definitely improve this user experience by enhancing our validation with JS.
Learn how to split your code up into separate modules using built-in JS features.
ES Modules are a way to isolate your JS files. This is similar to Node's require
syntax, but built-in to the JS language (rather than a Node-specific feature).
Modern browsers support modules loaded from a script tag with type="module"
. This tells the browser that this JS code may load additional code from other files.
Generally (for wider browser support) apps use a tool called a "bundler" to parse all the imports and "bundle" them into a single file that older browsers will understand. For ease of learning we won't be using a bundler yet.
Modules help with JavaScript's global variable problem: usually variables defined in one file are able to be accessed (and changed) in any other. This can be confusing and dangerous, since you can't tell where things are defined/changed.
With modules, variables defined in one file are not accessible in another. This means you can have 100 variables named x
, as long as they're all in different files.
The only way to use a value from another module is to explicitly export it from one file, then import it where you want to use it.
Files can have two types of exports: "default" and "named". A file can only have one default export, but can have many named exports. This is conceptually similar to the difference between module.exports = myFunction
and module.exports = { myFunction, otherFunction }
in Node.
This is how you create a default export:
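For example (the file and variable names are illustrative):

```javascript
// math.js
const add = (x, y) => x + y;

export default add;
```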
And this is how you create a named export:
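For example:

```javascript
// math.js
const add = (x, y) => x + y;

export { add };
```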
You can only default export a single thing, but you can have as many named exports as you like:
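E.g. (assuming all these functions are defined above in the same file):

```javascript
export default add;
export { subtract, multiply, divide };
```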
You don't have to export things at the end of the file. You can do it inline:
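For example:

```javascript
export default function add(x, y) {
  return x + y;
}

export const subtract = (x, y) => x - y;
```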
There are also two kinds of imports: default and named. The way you import a value must match the way you exported it. A default-exported variable must be default-imported (and vice versa).
This is how you import something that was default-exported:
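E.g. (assuming math.js default-exports a function; the name you give a default import is up to you):

```javascript
import add from "./math.js";
```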
Named-exports must be imported with curly brackets:
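E.g.:

```javascript
import { add } from "./math.js";
```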
You can import as many named-exports as you like on the same line:
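E.g.:

```javascript
import { add, subtract, multiply } from "./math.js";
```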
Import paths must be valid URIs. They can be local to your machine ("./maths.js"
) or on the web ("https://cdn.pika.dev/lower-case@^2.0.1"
). This means you must include the file extension for local files. Node lets you ignore this, but it is mandatory in the browser.
To get started download the starter files, cd
into the directory, then run npx servor workshop
to start a dev server. ES Modules require a server to work, so you can't just open the index.html
file directly.
Split up the JS code in workshop/index.js
into 3 files:
math.js
containing the operator functions
calculate.js
containing the calculate
function
index.js
containing the DOM event handlers
Change each file so they export and import everything they need to
Don't forget browsers only support imports inside a script tag with type="module"
Learn how to create your own middleware for Express servers
Learn how to write your own Express middleware to do logging and authentication.
Express is built around middleware. Middleware are functions that receive a request, do something with it, then either pass the request on to the next middleware or send a response (ending the chain).
Technically all route handlers are middleware, since they fit the above definition. However middleware usually transform the request in some way and don't actually send a response.
For example the built-in express.urlencoded
middleware grabs the HTTP request body, turns it into an object, then attaches it to the request object. This allows subsequent handlers to easily access it at request.body
. The 3rd party cookie-parser
middleware does the same for cookies. We're going to learn how to create our own middleware functions.
Download the starter files and cd
in
Run npm install
to install all the dependencies
Run npm run dev
to start the development server
Visit http://localhost:3000 to see the workshop app. You can "log in" by entering an email, which will be saved as a cookie so the server can identify you.
It would be useful if our server logged each incoming request to our terminal. That way we can see a log of what's happening as we use our server locally.
Usually we match our handlers to specific routes (e.g. server.get("/about", ...))
. However we can run a handler for every route by using server.use
:
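A sketch of such a handler (this assumes an Express server object named server):

```javascript
server.use((request, response) => {
  console.log(`${request.method} ${request.url}`);
  // no next() and no response.send(), so requests will hang
});
```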
This will log the method and URL for every request the server receives (e.g. GET /
or POST /submit
). Unfortunately that's all it will do, as this handler never tells the next handler in the chain to run. This will cause all requests to time out, since the server never sends a response using response.send
.
We can fix this with the third argument all handlers receive: next
. This is a function you call inside a handler when you want Express to move on to the next one.
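A working version of the logger might look like this:

```javascript
server.use((request, response, next) => {
  console.log(`${request.method} ${request.url}`);
  next(); // hand the request on to the next matching handler
});
```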
This tells Express to run the logger handler before every request, then move on to whatever handler is queued for that route. E.g. if the user requests the home page (GET /
) this handler will run, log the method/URL, then pass on to the next handler that matches GET /
, which will send an HTML response.
Note: we are just storing all the session info about the user in an object in-memory. In a real app you'd want this to live in a persistent store like a database.
Currently we are accessing the user cookie in three handlers (GET /
, GET /profile
and GET /profile/settings
). We have to grab the session ID from the signed cookies, then look the session info up in the sessions
object. This ends up being quite a lot of code repeated whenever we want to find out info about which user is currently logged in. We can create a middleware to handle this repeated task.
We don't know which routes will want to access the logged in user value so we'll set this middleware on the whole app using server.use
. We'll mimic the other middleware we're using and add the user
value to the req
object. This lets us pass values down through the request chain to later handlers.
Challenge 1.1
Create a new middleware that runs before every request.
It should read the sid
cookie and find the session info in the sessions
object
Then create a "session" property on the request object containing that info
Finally call the next
function to tell Express to move on to the next handler.
Change each handler that currently gets the session cookie to instead grab the info from req.session
.
Currently our GET /profile
route is broken. If the user isn't logged in we get an error trying to access user.email
(since req.session
is undefined). It would be better to show a "Please log in" page for unauthenticated users.
Challenge 1.2
Amend the GET /profile
handler to check whether there is a session.
If not send a 401
HTML response with an error message in the h1
and a link to the /log-in
page.
Now you should see the "please log in" page if you visit /profile
when you aren't logged in. However the GET /profile/settings
route has the same problem.
We could copy-paste the above code, but it would be better to avoid the duplication and move this logic into a middleware that makes sure users are logged in.
Challenge 1.3
Create a new middleware function named checkAuth
that takes req
, res
and next
as arguments.
If there is no req.session
respond with the 401
HTML.
If there is a req.session
call next
to move on to the next handler.
Add this middleware in front of the handler for any route we want to protect. We don't want this middleware running on all routes, since some of them are public.
Hint: you can set multiple middleware/handlers for a route by passing multiple arguments.
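E.g. something like this (the session shape is an assumption based on the workshop setup):

```javascript
// checkAuth runs first; it either sends a 401 or calls next()
server.get("/profile", checkAuth, (request, response) => {
  response.send(`<h1>Hello ${request.session.email}</h1>`);
});
```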
Learn how functions work, and how to manage asynchronous JavaScript code using callbacks.
In JavaScript functions are treated like any other variable. This concept is sometimes referred to as “first-class functions”. It lets us use functions in some interesting ways, and helps us manage "asynchronous" code.
When you create a function in JS you are creating a normal variable:
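For example:

```javascript
function returnsOne() {
  return 1;
}
```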
This code creates a new variable named returnsOne
.
This is still true (and maybe more obvious) for arrow functions:
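For example:

```javascript
const returnsOne = () => 1;
```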
This code also creates a new variable named returnsOne
. Notice how this is similar to defining a different type of variable:
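For instance:

```javascript
const answer = 42; // a number variable, created the same way
```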
You can reference this variable the same way you would any other (by using its name):
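For example:

```javascript
const returnsOne = () => 1;

console.log(returnsOne); // logs the function itself, not 1
```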
Since functions are normal variables you can even pass them as arguments to other functions.
Let's try passing a function as an argument to another function. Since we're just playing around to see what happens we can write this code in the console on this page. Open up the console and try this:
Write a function named logger
It should take one argument, then log that argument
Call logger
with the returnsOne
function as an argument
What does the browser print?
The main distinction between a function and other types of variable is that you can call a function. You call a function by putting parentheses (round brackets) after it:
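For example:

```javascript
const returnsOne = () => 1;

returnsOne; // just references the function
returnsOne(); // calls it, evaluating to 1
```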
Calling a function will run the lines of code inside of it. This is useful for two reasons:
Functions let us reuse code without copy/pasting it.
Functions let us delay running code until we're ready.
Functions need to be able to talk to each other. This is how you create a more complex program. You compose together a bunch of functions, passing the output of one into another.
The return
keyword lets us control what value we get after calling a function. Our returnsOne
function always returns a value of 1
. When you call a function the lines of code inside are run, and the function spits out its return value in place. You can then use this returned value however you like.
You can save it as a new variable:
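For example:

```javascript
const returnsOne = () => 1;

const answer = returnsOne();
console.log(answer); // 1
```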
Here you can imagine that returnsOne()
replaces itself with its return value. It's the same as if we'd written const answer = 1
directly.
You can also use the called function directly without an intermediary variable:
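For example:

```javascript
const returnsOne = () => 1;

console.log(returnsOne()); // 1
```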
Here the same thing happens. returnsOne()
replaces itself with its return value. It's the same as if we'd written console.log(1)
directly.
If the function doesn't have a return
statement you'll get undefined
:
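For example:

```javascript
const returnsNothing = () => {
  // no return statement
};

console.log(returnsNothing()); // undefined
```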
Calling or not calling a function is often a source of confusion when passing functions as arguments to other functions.
Open your console and recreate your logger
function from above
Call logger
, but this time pass in returnsOne()
(don't forget the parentheses)
Why do we see a different value logged than before?
Edit logger
to log the type of the value using the typeof
operator
You can also define functions inline: i.e. directly as you're using them. This is a common pattern for passing functions as arguments to other functions. For example we could re-write our logger
example:
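Something like this (logger is redefined here so the example is self-contained):

```javascript
function logger(thing) {
  console.log(thing);
}

// define the function inline, directly as the argument
logger(function () {
  return 1;
});
```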
Here we're defining a new function inline at the same time that we're passing it to logger
. This is a little hard to read, which is why most developers use arrow functions for inline functions like this:
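For example:

```javascript
function logger(thing) {
  console.log(thing);
}

logger(() => 1);
```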
This has the same result as before, when we defined a separate returnsOne
variable and passed it by name. The main difference here is we can't re-use the function, since it only exists as an argument to logger
.
Inline functions are often used for event listeners in the DOM. For example this code will log wherever the user clicks on the window:
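Something like:

```javascript
window.addEventListener("click", (event) => {
  console.log(event.clientX, event.clientY);
});
```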
Open your console and enter the event listener above
Extract the inline function and assign it to a named variable
Pass your extracted function variable to addEventListener
instead
The event listener should work the same whether your function is inlined or defined as a separate variable.
"Callback" is a scary word, but you've actually been using them the whole time. A callback is a function passed to another function as an argument. The name refers to what callbacks are usually used for: "calling you back" with a value when it's ready.
For example the addEventListener
above takes a function that it will call when the "click"
event happens. We're telling the browser "hey, call us back with the event info when that event happens".
Functions are a way to delay a block of code. Without them all our statements would run in order all in one go, and we'd never be able to wait for anything or react to user input.
Write a function named one
that takes a function as a parameter
It should call that function with 1
Call your one
function and pass in a function that logs its argument
The callback above might feel a bit convoluted: why pass in a callback to access a variable from inside when we could make the one
function return 1
directly?
Callbacks make more sense when dealing with asynchronous code. Sometimes we don't have a value to return straight away.
JavaScript is a "single-threaded" language. This means things generally happen one at a time, in the order you wrote the code.
When something needs to happen out of this order, we call it "asynchronous" ("async" for short). JavaScript handles this using a "queue". Anything async gets pushed out of the main running order and into the queue. Once JS finishes what it was doing it moves on to the first thing in the queue.
setTimeout
is a built-in function that lets you run some code after a specified wait time.
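For example:

```javascript
console.log(1);
setTimeout(() => console.log(2), 1000);
console.log(3);
// logs 1, 3, then (about a second later) 2
```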
It's intuitive that the above example logs 2
last, because JS has to wait a whole second before running the function passed to setTimeout
.
What's less intuitive is that the order is the same even with a timeout of 0ms.
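For example:

```javascript
console.log(1);
setTimeout(() => console.log(2), 0);
console.log(3);
// still logs 1, 3, 2: the callback waits in the queue
```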
This is because the callback passed to setTimeout always gets pushed to the back of the queue—the specified wait time just tells JS the minimum time that has to pass before that code is allowed to run.
Callbacks let us access values that may not be ready yet. Imagine ordering food in a takeaway. If you just get a pre-packaged sandwich they might be able to hand it to you straight away. This is "synchronous"—they can give you what you need then move on to the next person in the queue.
However if your food needs to be cooked you might give them your phone number, so they can text you when it's ready. This is "asynchronous"—they can move on to the next person in the queue, and "call you back" to collect your food later.
Our addEventListener
example from above can't return the click event, since it hasn't happened yet. The browser won't know where the user clicked until the click happens. So instead we pass a callback that the browser will run for us when the user clicks somewhere. It calls this callback with the event object containing the info we need.
Write a function asyncDouble
that takes 2 arguments: a number and a callback
It should use setTimeout
to wait one second
Then it should call the callback argument with the number argument multiplied by 2
Call asyncDouble
with 10
and a callback that logs whatever it is passed. You should see 20
logged after 1 second.
Can you see why asyncDouble
can't just return the doubled value?
Let's use callbacks to make some traffic lights. Download the starter files using the command at the top of this workshop. Open challenge/index.html
in your editor.
Inside the script
tag write a function light
that takes two parameters: a string and a callback
It should wait 1 second, log the string and then call its callback parameter
Use light
to log each colour of a traffic light sequence, in order, followed by "finished"
e.g. It should log:
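For example (the exact colours in your sequence are up to you):

```
green
amber
red
finished
```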
with a 1 second pause before each colour
Now we have a nested function that logs our traffic lights. However, highly nested functions can be hard to read. We can mitigate this by using a loop. Open challenge/stretch/index.html
in your editor.
Define an array, light_sequence
containing each colour of the light sequence, ending with "finished"
Modify your light
function to include a third delay parameter and pass it to your setTimeout
function
Use a method such as forEach
on light_sequence
to pass each element of the array to the light
function
Start by calling the light
function with a 1
second pause. What do you notice?
Try and replicate the behaviour of the previous function exactly. How can you ensure the delay is kept between the log of each colour?
Learn how different users browse the web to make your sites accessible to everyone.
It's important to remember that the web is for everybody. You should strive to build interfaces that are accessible to as many people as possible. This means specifically including users who often get left out: those with disabilities.
As developers we have both a moral and legal imperative to make sure our applications are accessible. A UI that can only be used by a sighted person with a mouse should be considered broken, just like a UI that isn't usable with a mouse would be considered broken.
This workshop will cover some of the ways disabled users browse the web, how to design/develop to meet those needs, and how to test your code to make sure that it does.
Don't make assumptions about how people will use your site
Strive to make your UI usable by everyone
Get out of your comfort zone when testing your UI
It's easy for developers to forget that people using their apps aren't necessarily like them. This includes permanent disabilities, temporary disabilities, and even unavoidable technical limitations.
Here are some examples:
A colorblind user cannot perceive the difference between certain colours. If you use only colour to indicate what a button does they won't be able to use it.
A blind user cannot see your UI, and may use screen reader software to have the elements on the page read out loud by their computer. If you only communicate important information visually (e.g. using images) they won't hear it.
A user with a broken arm in a sling cannot use their mouse. If your UI doesn't allow them to navigate with their keyboard they won't be able to use it.
A low-income user may not have the latest iPhone. If your app only works properly on fast, new, expensive devices you are excluding them.
This document will mostly focus on keyboard and screen reader access, since those tend to be the predominant problems for web developers to solve. However that does not mean other disabilities are not important to consider when designing and developing interfaces.
If you're a non-disabled web developer it's possible you aren't aware of or haven't tried navigating the web using anything but your mouse. It's important to get a sense of how other people will be attempting to use the sites you build.
Many users cannot use a mouse, or find it easier to use their keyboard to navigate. This also applies to some visually-impaired users who will struggle to use a mouse if they cannot see the cursor.
You can scroll a web page up or down by pressing the up ↑ or down ↓ arrow keys. You can jump a whole "page" (equal to the viewport height) down with the spacebar.
You can also jump a page up or down using the Page Up or Page Down keys. On Mac keyboards without these keys you can use ⎇ option + ↑ or ↓.
You can scroll right to the top or bottom of a page using the Home or End keys. On Mac keyboards without these keys you can use ⌘ command + ↑ or ↓.
The most important key is "tab" ⇥. This will move your "focus" to the next interactive element on the page. For example links, buttons or inputs. This allows you to quickly jump between these elements, e.g. to fill out a form.
By default a "focused" element has an outline around it (in Chrome this is like a blue glow). You can "click" a focused button or link with the "return" ⮐ or spacebar keys. You can submit a form by pressing return whilst focusing an input.
Try using only your keyboard to navigate while you're reading the rest of this page.
Navigating via keyboard still requires you to know what content is on the page. Visually-impaired and blind people use "screen reader" software to read whatever is on the page out loud.
Most operating systems have a built-in screen reader now. macOS has VoiceOver, Windows 10 has Narrator and Linux has Orca.
Expand the relevant section below to get instructions for your operating system. Once you've got your screen reader working try to use it to navigate this page. It takes a bit of practice to get used to.
VoiceOver (macOS)
You can activate VoiceOver by pressing ⌘ command + F5 (or opening System Preferences and navigating to Accessibility > VoiceOver > Enable VoiceOver). By default VoiceOver shortcuts use the ∧ control and ⌥ option keys as a modifier. This is referred to as the "VO key" in [the docs](https://www.apple.com/voiceover/info/guide/_1121.html).
Useful commands:
- control: stop VoiceOver speaking at any time.
- control + option + U: open the "Web rotor". Press ← or → to view all the headings/links/form controls etc on a page.
- control + option + ← or →: move cursor to previous/next item.
- Read more about [the basics of navigating with VO](https://www.apple.com/voiceover/info/guide/_1124.html)
Narrator (Windows 10)
You can activate Narrator by pressing the Windows logo key + control + ⮐. By default Narrator keyboard shortcuts use caps lock ⇪ as the modifier key. This is labelled the "Narrator" key in [their docs](https://support.microsoft.com/en-us/windows/complete-guide-to-narrator-e4397a0d-ef4f-b386-d8ae-c172f109bdb1).
Useful commands:
- control: stop Narrator speaking at any time.
- ⇪ + S: read a summary of a webpage, including links and headings
- ⇪ + ↓: start reading the document from the beginning
- There are [lots of different ways to read text](https://support.microsoft.com/en-us/windows/chapter-4-reading-text-8054c6cd-dccf-5070-e405-953f036e4a15) so try exploring them.
Orca (Linux)
Depending on which Linux distro you're using you may need to install Orca:
```bash
sudo apt install orca
```
You should be able to run the `orca` command in your terminal to start it. By default Orca keyboard shortcuts use caps lock ⇪ as the modifier key on laptops. Read more in [their docs](https://help.gnome.org/users/orca/stable/howto_keyboard_layout.html.en).
Useful commands:
- ⇪ + S: stop Orca speaking.
- ⇪ + ;: read the entire document from the beginning
- ← or →: read previous/next character
- alt + shift + H: show a list of headings
- H and shift + H: read next/previous heading
- There are [lots of other ways to navigate](https://techblog.wikimedia.org/2020/07/02/an-orca-screen-reader-tutorial/) so try exploring them.
It's important to bear in mind that people don't fall neatly into separate categories. For example lots of keyboard users still use their mouse too. Lots of keyboard users can see the page fine. 85% of blind people have some degree of light perception, and so they may still use visual cues, or their mouse.
Developers sometimes try to detect what "type" of user is on the page, so they can enable/disable certain things (e.g. only show focus outlines for keyboard users). This is almost always a bad idea—you cannot detect this accurately, and even if you could it's better to provide as similar an experience as possible to all users.
Web accessibility (often abbreviated to "a11y") is governed by the Web Content Accessibility Guidelines (WCAG). This is a shared standard that includes different criteria you can check your site against.
WCAG is quite long and complex, so a quick way to test a site is to use The A11y Project's Checklist. This is a list of simple things you should do on every site you build.
The A11y Project Checklist
You can also use automated testing tools to catch some types of problems. Chrome comes with a "Lighthouse" tab in the Developer Tools. This can run different types of tests on a page, including "Accessibility". It will inform you of obvious failures like low colour contrast or missing image alt text. However it cannot catch more complex problems, like a custom component that cannot be controlled with the keyboard.
Most importantly you should manually test—use the page in different ways and see if you get stuck. Try to fill out a form using only your keyboard. Turn your screen reader on and see if critical information gets left out. This will help you catch broken interactions that automated tools cannot.
You're going to be identifying and fixing a11y problems on this example page. The page contains 11 failures; try to find them all.
Re-read The A11y Project's Checklist before you start so you can identify obvious WCAG violations.
Don't forget to use Chrome's Lighthouse tab to find easy problems.
You'll have to manually test with your keyboard and screen reader to find some of the issues.
Learn how to use Node and Express to create HTTP servers
Node is often used to create HTTP servers for the web. It's a bit fiddly to do this with just the built-in modules, so we're going to use the Express library to help create our server.
HyperText Transfer Protocol (HTTP) is a way for computers to exchange messages over the internet. The "client" computer will send a "request" (often via a web browser). E.g. if you visit https://google.com your browser sends a request like this:
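Simplified, the request might look like this:

```
GET / HTTP/1.1
host: google.com
accept: text/html
```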
A "server" computer receives this request and sends a "response". E.g. Google's server would send a response like this:
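Simplified, the response might look like this:

```
HTTP/1.1 200 OK
content-type: text/html

<!DOCTYPE html>
<html>
  <!-- the Google home page -->
</html>
```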
We're going to learn how to use Node to create an HTTP server that can respond to requests.
Create a new directory
Move into that directory
Initialise the project to create a package.json
Install the Express library
Open your editor and create a server.js
file
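The setup steps above can be sketched as terminal commands (the directory name is just an example):

```
mkdir express-workshop
cd express-workshop
npm init -y
npm install express
```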
Follow along with each example in your own editor.
We can create a new server object using the express
module:
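For example, in server.js:

```javascript
const express = require("express");

const server = express();
```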
Our server currently does nothing. We need to add a "route". This is a function that will be run whenever the server receives a request to a specific path.
The server
object has methods representing all the HTTP verbs (GET
, POST
etc). These methods take two arguments: the path to match and a handler function.
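Something like:

```javascript
server.get("/", (request, response) => {
  // handle GET requests to "/" here
});
```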
Here we tell the server to call our function for any HTTP GET requests to our home path.
The handler function will be passed two arguments: an object representing the incoming request, and an object representing the response that will eventually be sent.
We can use the send
method of the response object to tell Express to send the response. Whatever argument we pass will be sent as the response body.
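For example:

```javascript
server.get("/", (request, response) => {
  response.send("hello");
});
```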
Our Node program has a functioning server, but that server isn't currently listening for requests. Servers need to connect to the internet and listen for incoming HTTP requests via a "port".
A "port" is an entry/exit point on a computer to allow network connections (like an airport allows people in/out of a country). HTTP uses port 80 by default (and HTTPS uses 443), so you don't normally see them in URLs on the web. E.g. when you visit https://google.com
you are really going to https://google.com:443
.
When you're running a server locally in development it's common to use a random number like 3000 or 8080. You can access a port by adding it to a URL like this: http://localhost:3000
.
We can tell our server to listen on a port like this:
We use the listen
method of the server object. This takes the port number to listen on, and an optional callback to run when it starts listening. This callback is a good place to log something so you know the server has started.
Now we can run the program in our terminal:
The server will start and you should see "Server listening on http://localhost:3000" logged.
Important: The Node process will continue running until you tell it to stop by typing control + c in your terminal. Every time you change your code you must stop the old process and start a new one by running node server.js
again.
Open http://localhost:3000 in your browser. This will send a GET
request to your server. You should see the "hello" response on the page. It's helpful to open the network tab of the dev tools so you can see all the details of the request and response.
HTTP responses need a few different things:
A status code (e.g. 200
for success or 404
for not found)
Headers to provide info about the response
A body (the response data itself)
We're currently only providing the body. Express will set the status code to 200
by default. To set a different code use the response.status
method:
You can chain this together with send
to make it shorter:
Express will automatically set some headers describing the response. For example since we called send
with a string it will set the content-type
to text/html
and the content-length
to the size of the string.
You can set your own headers using the response.set
method. This can take two strings to set a single header:
Or it can take an object of string values to set multiple headers:
We aren't limited to plaintext in our body. The browser will parse any HTML tags and render them on the page. Change your handler to return some HTML instead:
Visit http://localhost:3000 again and you should see an h1
rendered.
Since we're rendering HTML using strings we can insert dynamic values using template literals. Let's add the current time to the response:
We aren't limited to a text response. Let's send some JSON as well. Add a new route to your server:
HTTP response bodies are always strings, so Express will automatically convert our object to a JSON string for us. It will also set the content-type
header to application/json
.
Visit http://localhost:3000/json and you should see a JSON object with a message
property.
Sometimes we want to redirect the request to another URL. You can use the response.redirect
method for this. Add a new route:
Now if you visit http://localhost:3000/redirects in your browser you should end up back on the home page. If you look at the network tab in the dev tools you'll see two requests.
First a request to /redirects
. This has a response status code of 302
and a location
header pointing to /
. This tells the browser to then make a second request to /
.
Sometimes you can't know in advance all the routes you need. For example if you wanted a page for each user profile: /users/oli
, /users/dan
etc. You can't statically list every possible route here. Instead you can use a placeholder value in the path to indicate that part of it is variable:
We use a colon (:
) to indicate to Express that any value can match a part of the path. It will put any matched values on the request.params
object so you can use them.
If you visit http://localhost:3000/users/oli you should see "Hello oli". If you visit http://localhost:3000/users/knadkmnaf you should see "Hello knadkmnaf".
Try visiting http://localhost:3000/not-real in your browser. You should see Cannot GET /not-real
. This is Express' default response for when no handler matches a path.
You can customise this by putting a "catch-all" handler after all your other routes. If no other route matches then this will be used (since Express matches them in the order they are defined).
We can use the server.use
method to create a handler that will match any method/route:
Reload http://localhost:3000/not-real and you should now see your custom response.
Express route handlers don't have to send a response. They actually receive a third argument: the next
function. Calling this function tells Express to move on to the next handler registered for the route.
Let's add another handler for the home route. It will just log the request, then move on to the next handler:
If you run this code and refresh the home page you should see GET /
logged in your terminal.
The route methods accept multiple handler functions, so you can actually pass them all in one go. This does the same thing:
Express calls handlers that don't send a response "middleware". Our example here isn't that useful, but we could change it to run before all requests. We can do this with server.use
:
Now we'll get a helpful log like GET /
in our terminal when we load any page. Without middleware we would have to copy this into every route we wrote.
It's common to have some static files that don't change for each request. E.g. CSS, images, maybe some basic HTML pages. For convenience Express includes a built-in middleware for serving a directory of files: express.static
.
Create a new directory named public
. This is where we'll keep all the files sent to the client. Create a public/style.css
file with some example CSS.
Finally configure the middleware to serve this directory:
The server will now handle requests to http://localhost:3000/style.css and respond with the file contents. Note that there is no public
in the final URL: Express serves the files from the root of the site.
So far we've only created GET
handlers. Let's add a POST
handler to see how we'd deal with forms submitting user data to our server:
We can't make a test POST
request as easily in our browser, since that would require a form. Instead we can send a request from our terminal using the curl
program.
Open a new terminal window/tab and run:
You should receive a response of "Thanks for submitting". You can add the --verbose
flag to see the entire HTTP request/response. If you check the terminal where your server is running you should see "posted" logged.
A POST
request that doesn't send any data isn't very useful. Usually a form would be submitting some user input. We can add data to our curl
request with the -d
flag:
However, since bodies can be large, they arrive in lots of small chunks. This means there's no simple way to access the whole body at once. Instead we must use a "body parser" middleware.
For convenience these are included as part of the Express module. Request bodies can come in different formats (JSON, form etc), so we must use the right middleware. We want express.urlencoded
, which is what forms submit by default. This is a function we call to create our middleware:
This middleware will wait until all the submitted data has been received, then add a body
property to the request
object. We can then read this property in our handler.
If you use curl
to send another POST
request you should see something like { name: 'oli' }
logged in your server terminal.
That's it, you've learnt the basics of using Node and Express to create an HTTP server.
This workshop is an introduction to using Git from the terminal
This workshop will help you practise with Git on the terminal. Follow these steps to connect a local repository to a remote one on GitHub. Before following these steps, you'll need a GitHub account.
This workshop builds on concepts covered in earlier workshops. Completing them is not a requirement to understand this one, however you might prefer to focus on the fundamentals there before diving into this workshop.
This workshop assumes you have a working knowledge of the terminal.
Create a folder on your computer and give it a name relevant to your project. Your local folder should share a name with the repository you create on GitHub.
Open up your terminal and navigate to the folder where you've saved your files.
Initialise this folder as a Git repository. You can do this by running the command git init
.
Add files for HTML, CSS and JavaScript. Add some content to the HTML, link the CSS and add a script tag which points to your JS file.
Staging defines which files we'd like to add to our next commit. When staging, you can specify individual files, or use . to stage all the files in your current directory.
To stage our index.html
only:
To stage all files in the directory:
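Both staging commands sketched together (the scratch-repo setup lines just let the example run anywhere; in the workshop you'd already be inside your project folder):

```shell
# scratch setup so the git add lines below have something to work on
rm -rf /tmp/staging-demo && mkdir /tmp/staging-demo && cd /tmp/staging-demo
git init -q
echo "<h1>Hello</h1>" > index.html
echo "h1 { color: black; }" > style.css

# stage a single file
git add index.html

# stage all files in the current directory
git add .
```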
Committing is like saving your progress at a point in time. You are telling Git that all the changes staged should be tracked.
The -m
flag allows you to write a commit message in the command line.
It's conventional to write the message in the imperative mood, for example "change this" rather than "changing this".
Aim for your commit history to describe precisely and concisely what you did in that change. Someone reading your commit history should be able to identify at a glance what changes were made.
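A sketch of a commit (the setup lines are scratch scaffolding; the message itself is just an example):

```shell
# scratch setup: in the workshop you'd have staged files already
rm -rf /tmp/commit-demo && mkdir /tmp/commit-demo && cd /tmp/commit-demo
git init -q
git config user.name "Demo" && git config user.email "demo@example.com"
echo "<h1>Hello</h1>" > index.html
git add index.html

# -m lets you write the message inline; use the imperative mood
git commit -m "Add page heading"
```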
Create a new repository on GitHub. You can do so by navigating to your repositories and clicking the green 'new' button.
The easiest way to set this up is without a README, to avoid conflicts in the Git history. When you land on the repository page, GitHub will give you commands for connecting your local repository to the remote, renaming your main branch, and pushing your changes.
First, you're pointing Git to the url of your remote repository.
The -u
flag here will set the default remote branch so when you make your next push, you'll only need to type git push
. You can batch multiple commits and push them all together.
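The three commands GitHub suggests look like this (here a local bare repository stands in for GitHub so the example can run anywhere; in the workshop, origin would be your repository's GitHub URL):

```shell
# scratch setup: a bare repo standing in for the GitHub remote
rm -rf /tmp/remote-demo.git /tmp/local-demo
git init -q --bare /tmp/remote-demo.git
mkdir /tmp/local-demo && cd /tmp/local-demo
git init -q
git -c user.name=Demo -c user.email=demo@example.com commit -q --allow-empty -m "First commit"

# point Git at the remote repository
git remote add origin /tmp/remote-demo.git
# rename the current branch to main
git branch -M main
# push, and set origin/main as the default upstream with -u
git push -q -u origin main
```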
Deploy your site to GitHub Pages in the repository settings. You'll receive a link to the live version of your site.
Add this url to your README.md
and repository description.
It can take a few minutes for GitHub to deploy the repository for the first time. You might need to wait a little while before you see your changes live.
Once you have a local and remote repository connected, you'll be able to keep both in sync by regularly staging, committing and pushing your changes to GitHub.
You should commit once you have completed a change or feature, not after writing a certain amount of code, and you shouldn't wait until a project is complete. Getting into the habit of making small commits often will give you a good level of practice with Git. Regularly pushing your changes will ensure your codebase is backed up and version-controlled. Additionally, you'll have GitHub activity on your profile (green squares).
To recap the workflow:
After you've saved your changes, stage them using git add
.
Commit your staged files using git commit -m "What you did..."
.
Push your changes to your repository using git push
.
Learn how to use GitHub Projects for effective project management and collaboration.
This workshop will guide you through the process of setting up and using GitHub Projects for effective project management and collaboration. You'll learn how to create a project, add tasks, link them to issues, and use different views to track progress.
Despite its name, GitHub Projects is not tied to a single repository. It's a project management tool that can be used across multiple repositories or even without any repository at all.
GitHub Projects is a flexible tool for planning and tracking work.
It can be used for various purposes, from software development to event planning.
Projects are not automatically tied to repositories when created.
Go to your GitHub profile or organization.
Click on "Projects" in the top navigation bar.
Click "New project".
Choose a template or start from scratch.
Give your project a name and description.
Like repositories, GitHub Projects can be public or private, and you need to manage access to them.
Go to your newly created project.
Click on the three dots (...) menu in the top right corner.
Select "Settings".
Go to "Manage access".
Add collaborators as needed.
GitHub Projects offers different views to help you visualize your work:
Board view: A Kanban-style board for task management.
Table view: A spreadsheet-like view for detailed tracking.
Roadmap view: A Gantt chart for timeline visualization.
Let's explore each of these:
In your project, click on the dropdown next to the current view name (likely "Board").
Select each view and observe how the information is presented differently.
In the Board view:
Click the "+ Add item" button at the bottom of a column.
Type a title for your task.
Press Enter to create the task.
Converting a task to an issue allows for better traceability and automation.
Click on a task to open its details.
Click on the "Convert to issue" button.
Select the repository where you want to create the issue.
Click "Create issue".
Alternatively, click on the three dots (...) menu next to the task.
Select "Convert to issue" from the dropdown menu.
Click on a task to open its details.
In the sidebar, click on "Assignees".
Select the team member you want to assign the task to.
The Roadmap view helps in setting deadlines and visualizing the project timeline.
Switch to the Roadmap view.
Click and drag on a task to set its start and end dates.
Observe how tasks are arranged on the timeline.
Go to your repository on GitHub.
Click on the "Projects" tab.
Click "Link a project".
Select your project from the list.
Repeat these steps for each repository you want to link to the project. This allows you to:
Track issues and pull requests from multiple repositories in one place
Create cross-repository dependencies
Get a holistic view of your entire project, even when the code is distributed across different repositories
Remember, you can link as many repositories as needed to a single project, making it easier to manage large, multi-component software projects.
GitHub Projects allows you to manage automation workflows based on repository activity.
In your project, go to "Settings" > "Workflows".
You'll see a list of predefined workflows that you can enable or disable.
To adjust a workflow, click on it to edit its settings. For example, make sure the workflow "When an issue is closed, set its status to Done" is enabled. (Note: This workflow may already be enabled by default in some project templates).
Practice deploying a PostgreSQL database to Heroku, plus some advanced SQL commands.
You don't want your deployed production app talking to a database running on your laptop. This would be slow, insecure and require you to leave it turned on all the time.
Instead we can host our production database on a 3rd party service like Heroku. This is especially convenient if we're already hosting our production server on Heroku.
Follow Heroku's instructions to create a new Postgres database.
Once you're done you should have a connection string that looks something like this:
You can connect to the remote database from your terminal by running:
Let's practice some more advanced SQL commands. There's a bunch of data about various FAC cohorts in init.sql
. You'll need to read this to figure out exactly what tables you're working with.
You may have to search the internet for SQL you haven't seen before.
There's usually more than one way to get the right answer. If your solution is different, that's fine!
Download the starter files
Connect to your Heroku database with psql your_database_url
Insert the data into your DB with \i init.sql
You can check everything is set up by listing the database tables with \dt
. You should see four FAC-related tables: cohorts
, students
, projects
and students_projects
.
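As a sketch of the kind of query these exercises need, a JOIN between two of these tables might look like this (the cohort_id and id column names are assumptions; check init.sql for the real schema):

```sql
-- list each student's username next to their cohort's location
-- (assumes students.cohort_id references cohorts.id)
SELECT students.username, cohorts.location
FROM students
JOIN cohorts ON students.cohort_id = cohorts.id;
```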
Cohort locations
List the names of all cohorts that took place in Finsbury Park.
Expected result
Student locations
List the usernames of all students who attended FAC in Finsbury Park.
Expected result
Students with cohort locations
List the username of each student along with the location of their cohort.
Expected result
Students with projects
List all project names with the usernames of the students who worked on them.
Expected result
Bonus: Students with projects by location
List all project names with the usernames of the students who worked on them, only for students who attended FAC in Finsbury Park.
Expected result
Practice collaborating on code using Git and GitHub, including branches, pull requests, and resolving merge conflicts.
An exercise to practice git workflow skills. The workshop should be undertaken by two programmers, working on two computers.
Note: you may see references to a master
branch in diagrams or external resources. This used to be the name of the default Git branch, but it was changed to main
in 2020. New repos should all have a main
branch, so that's what you should use.
You're working in a team of two on a project for a new client. Steps 1 to 8 in this section should be completed by one of you, who we'll refer to as Programmer 1
.
Programmer 1 creates a new GitHub repo and clones it.
Create a new GitHub repo on Programmer 1's profile, making sure to initialise it with a README.md
Go to "Settings > Collaborators" and add Programmer 2 so they can access the repo
Programmer 2 should check their email and accept the invite to collaborate
Clone this new repository using your terminal.
Move into the newly created directory.
This is what your remote and local repositories look like after this. HEAD is a reference to your current location.
Normally you would decide on which "features" you were going to build and then break these down into smaller tasks before starting the work. These tasks can be tracked with GitHub issues.
Raise a new issue with a descriptive title.
In the body of the issue, provide more detail about how to complete the work.
Assign yourselves to this issue.
Create a branch with a unique and descriptive name. For example, create-heading-with-shadow
.
Leave the main branch by switching to the new branch you have just created.
Alternatively you can do this in a single step by using the -b
flag to tell the git checkout
command to create the new branch:
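Both approaches sketched together (the scratch-repo lines just let the example run anywhere; in the workshop you'd run only the branch/checkout commands):

```shell
# scratch setup so the branch commands have a repo to run in
rm -rf /tmp/branch-demo && mkdir /tmp/branch-demo && cd /tmp/branch-demo
git init -q
git -c user.name=Demo -c user.email=demo@example.com commit -q --allow-empty -m "Initial commit"

# create a branch, then switch to it
git branch create-heading-with-shadow
git checkout create-heading-with-shadow

# or create and switch in a single step with -b
git checkout -b fix-typo-heading
```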
An easy way to check which branch you are working on is to look at the VS Code status bar. In the following example, the branch is 'FAC30_updates'.
By clicking on the branch name, you can view all branches, both local and remote (those that are in the repository but not on your local machine).
This can be useful to get the big picture, but we highly recommend using the command line instead. The equivalent command to show all the branches is git branch --all.
Now we need to write some code to add the new feature.
Add the following code into a new file called index.html
.
Note: you may notice errors in this code. This is deliberate—we'll be fixing them later on in the workshop.
Create a new file called style.css
which contains:
Staging changes in Git:
Add index.html
and style.css
to the staging area.
If you know you definitely want to stage all your current changes you can save some typing and use git add . instead.
The message you type to describe each commit is important, since it will be preserved in the history of the project for future contributors. It should be descriptive and relatively high-level—someone can always read the code to find out specifically what you changed.
For example this message is not descriptive enough: "update title". This one is a bit too descriptive: "Use an h1 element with a classname applying nice text shadow CSS". This one has a good balance: "Add new page heading element with styles".
Commit the files that are in the staging area.
One final note about committing: Take a moment to review your changes before confirming your commit. While unstaging changes is straightforward, there's no simple "uncommit" command. Although it's possible to undo a commit, it can be a complex process, especially if you've already pushed the commit to a shared repository. It's always better to carefully consider your commit before finalizing it.
After committing your changes locally, your remote repository on GitHub remains unchanged. To synchronize your local changes with the remote repository, you need to push your changes.
Ensure you're on the correct branch: Before pushing, double-check that you're on the branch you want to push:
This should show create-heading-with-shadow
with an asterisk next to it.
Push the create-heading-with-shadow branch to the "origin": The "origin" refers to the GitHub repository that you originally cloned from. Use the following command:
This command tells Git to push your local create-heading-with-shadow branch to the same branch on the remote repository.
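The push sketched end to end (a local bare repository stands in for GitHub so the example can run anywhere; in the workshop, origin is already your GitHub repo):

```shell
# scratch setup: a bare repo standing in for GitHub
rm -rf /tmp/push-demo && mkdir /tmp/push-demo && cd /tmp/push-demo
git init -q --bare remote.git
git clone -q remote.git local 2>/dev/null
cd local
git config user.name "Demo" && git config user.email "demo@example.com"
git checkout -q -b create-heading-with-shadow
git commit -q --allow-empty -m "Add new page heading element with styles"

# push the local branch to the same branch on the origin remote
git push -q origin create-heading-with-shadow
```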
Check the push result: After pushing, Git will display a message indicating the result. If successful, it will show something like:
Verify on GitHub: After pushing, visit your GitHub repository in a web browser. You should see your new branch listed, and it will contain the changes you just pushed.
After pushing your changes to GitHub, the next step is to create a Pull Request (PR). A PR is a way to propose changes from a branch to the main codebase and request review from your teammates.
Navigate to the repository on GitHub:
Open your web browser and go to the GitHub page of your repository.
Initiate the Pull Request:
You should see a prompt suggesting to create a PR for your recently pushed branch. If not, click on the "Pull requests" tab, then click the "New pull request" button.
Select the branch you want to merge (in this case, create-heading-with-shadow) into the main branch.
Set up the Pull Request:
3.1 Add a descriptive title:
Choose a clear, concise title that summarizes the changes (e.g., "Create page heading with shadow effect").
Good titles help reviewers quickly understand the purpose of the PR.
3.2 Write a detailed description in the body:
Explain what changes you've made and why.
Mention any potential impacts or dependencies.
If applicable, include steps to test the changes.
Link the PR to the relevant issue:
Use keywords like "Relates #1" to reference the issue without closing it automatically.
Use "Closes #1" or "Fixes #1" if this PR should close the issue when merged.
3.3 Select reviewers and assignees:
Assign Programmer 2 as the reviewer. They will be notified to review your changes.
You can also assign yourself or Programmer 2 to the PR, indicating who's responsible for moving it forward.
Preview and submit:
Review all the information you've added to ensure it's complete and accurate.
Click "Create pull request" to submit it.
Post-creation actions:
After creating the PR, you can still edit its description, add comments, or include additional commits to the branch.
GitHub will automatically run any configured checks or integrations.
You usually shouldn't merge your own pull requests. A PR gives the rest of your team the chance to review before your changes are merged into main
. In your projects, you will be asking the other pair to do this.
Programmer 2 reviews the changes. This is where you'd leave any feedback or request changes to be made.
Programmer 2 merges the pull request
Now your remote repo looks like this:
After the pull request is merged, you should address the related issue: If you included "Closes #1" (or similar closing keywords) in your commit message or pull request description, GitHub will have automatically closed the associated issue. If not, you should manually close the issue that tracked this feature, as the work is now complete and merged into the main branch.
Your quality assurance engineer has just noticed some problems with the recent change to the website.
Spelling mistake in the heading (the word 'WORKSHOW' should be replaced with 'WORKSHOP')
The classname applied to the h1
is wrong, so the styles aren't applying (class="some-heading"
should be replaced with class="page-heading"
).
Programmer 1 will fix the first problem and Programmer 2 will fix the second. From this point on you both need to work on separate computers.
Note: Only one line in the index.html
file needs to be modified.
Programmer 2 also needs a copy of the repo, since they haven't worked on it yet
Create the following two issues and assign each one to a different person
Fix typo in page heading
(Programmer 1)
Correct the classname of page heading
(Programmer 2)
Remember or take note of the issue numbers when you create them, as you will need these later on.
Git branches are used to make sure each person can work independently without affecting the code others in the team are working on.
Both programmers create one branch each:
git checkout -b fix-typo-heading
(Programmer 1)
git checkout -b update-class-heading
(Programmer 2).
It's important to avoid making unrelated changes as you work. It can be tempting to just quickly fix an error if you spot one while doing some other work. However this makes the Git history of changes really difficult to track. It's also confusing to review a pull request that has lots of unrelated changes.
Programmer 1 fixes only the spelling typo in the heading (WORKSHOW -> WORKSHOP).
Programmer 2 updates only the class name of the heading (class="some-heading"
-> class="page-heading"
).
Both programmers save their index.html
files.
Both programmers check the status of their files, to confirm that index.html
has been modified.
Both programmers add their modified index.html
file to the staging area.
Both programmers should commit their changes. Remember to use a multi-line commit message that references the relevant issue. (Refer back to the issue numbers you noted when you created them.)
Important: don't work in parallel from here. We want to push, PR and merge Programmer 1's change first, then move on to Programmer 2's change.
Before pushing your branch, it's crucial to incorporate the latest changes from the remote main
branch. In real-world projects, multiple team members often contribute code simultaneously, which can lead to divergence between your branch and the main codebase. To minimize conflicts and ensure your changes integrate smoothly:
First, always fetch and merge the latest updates from the remote main
branch into your working branch.
Then, resolve any conflicts that may arise from this merge.
Only after successfully integrating the latest main
changes should you push your branch.
Let's integrate this workflow in our workshop:
Programmer 1 switches to main
branch.
Programmer 1 pulls any changes from the main
branch of the remote (GitHub repo). There should be no changes since neither of you has pushed any changes yet.
On the default branch you can use a shorthand, since Git knows which remote branch to use: just run git pull.
Programmer 1 switches back to the fix-typo-heading
branch.
Since there were no new changes to deal with Programmer 1 can move on to pushing.
Programmer 1 pushes their fix-typo-heading
branch to remote
Programmer 1 creates a pull request.
Don't forget a descriptive title/body (and link the relevant issue in the body)
Assign Programmer 2 to review
Programmer 2 reviews the pull request
Step through each commit (in this case one)
Check the "Files changed" tab for a line-by-line breakdown.
Click "Review changes" and choose from "Comment", "Approve" or "Request changes"
Programmer 2 merges the pull request
Note: now Programmer 1's changes are merged we can move on to Programmer 2's
Remember it's always a good idea to check for any new changes on the remote before pushing your branch. In this case we know that Programmer 1's branch was just merged, so there will be changes. Once we've pulled them to the local main
branch we'll need to merge them into the update-class-heading
branch.
Programmer 2 switches to main
branch.
Programmer 2 pulls the remote main
branch
Programmer 2 switches back to the update-class-heading
branch.
Programmer 2 tries to merge main
branch into update-class-heading
branch.
At this point there should be a "merge conflict". Move on to the next section to find out how to resolve this.
The code between <<<<<<< HEAD and ======= is the current change on this branch. The code between the ======= and >>>>>>> main is the change from the main branch that we are merging in.
You can resolve the conflict by manually editing the code to leave only the change you expect. You can also use VS Code's built-in options to choose either the HEAD
or main
change (or both). You also need to make sure to remove the conflict marker lines, since those are not valid HTML code. Finally you need to make a new commit for the merge.
Programmer 2 removes the conflict marker lines
Programmer 2 manually merges the two different h1
lines to keep both new changes
Programmer 2 adds the index.html
file to staging area and commits the merge changes.
Programmer 2 pushes the update-class-heading
branch to remote.
Programmer 2 creates a pull request.
Don't forget a descriptive title/body (and link the relevant issue in the body)
Assign Programmer 1 to review
Programmer 1 reviews the pull request
Step through each commit (in this case one)
Check the "Files changed" tab for a line-by-line breakdown.
Click "Review changes" and choose from "Comment", "Approve" or "Request changes"
Programmer 1 merges the pull request
That's it, you have successfully followed the GitHub flow to add a new feature and fix some bugs.
Both Programmer 1 and Programmer 2 can switch back to the main
branch and pull the remote changes. They should also both delete their other local branches since they are now merged. The final step should be to close any open issues (if the PRs didn't do this automatically).
Screenshot of a Lighthouse audit of this website
Running the command without the -m
flag will open up an editor in your terminal where you can write a commit message. Exit this by hitting esc
and typing :wq
. You can configure Git to use a different editor to avoid this.
Second, you're renaming your branch to main
. GitHub changed the default branch name in 2020 to dissociate from using the term master
which can have negative connotations.
For the sake of this exercise, imagine the following request from your client: they want a beautifully styled heading for the homepage. It should be bold black writing with a background shadow that makes it stand out.
This is how the issues console looks on GitHub.
There are many types of workflow. At FAC we use GitHub Flow, where the main
branch is always deployable. In this flow, each branch is used for a separate feature.
Staging in Git is like a preparation area for your next commit. When you modify files in your project, you can choose which changes to "stage" using the git add command. These staged changes are what will be included in your next commit. This allows you to selectively commit only certain changes, even if you've modified multiple files. Think of it as a way to review and organize your changes before making them permanent in your project's history.
The history of a project is made up of commits. Each commit is a snapshot of your whole repository at one particular time.
Here are some tips on writing better, more useful commit messages.
It's also important to link your code changes to the issues that track them. GitHub lets you use a hash symbol followed by a number to reference an issue. For example if the message includes Relates #1
it will show this commit in issue number 1 on the GitHub repo. If a commit totally fixes an issue you can use Closes #1
, and GitHub will automatically close the issue when the commit is pushed to GitHub.
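For example, a commit message with an extra line referencing an issue might look like this (the issue number #2 is hypothetical; use the one you noted earlier):

```shell
# scratch setup so the commit command has something to commit
rm -rf /tmp/commit-msg-demo && mkdir /tmp/commit-msg-demo && cd /tmp/commit-msg-demo
git init -q
git config user.name "Demo" && git config user.email "demo@example.com"
echo "<h1>HELLO WORKSHOP</h1>" > index.html
git add index.html

# the second -m adds an extra paragraph to the commit message
git commit -m "Fix typo in page heading" -m "Closes #2"
```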
Here we're using a second -m
flag to add another line to our commit message with the extra issue info. You could also just run git commit
, which will open your configured editor so you can write longer commit messages in a more comfortable environment.
This conflict occurred because the line with the <h1>
heading was changed by both Programmer 1 and Programmer 2. Git doesn't know how to merge the two different versions of this line, so it needs you to do it manually. A merge conflict looks like this:
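In this workshop the conflicted lines in index.html will look something like this (the surrounding heading text is illustrative):

```
<<<<<<< HEAD
<h1 class="page-heading">HELLO WORKSHOW</h1>
=======
<h1 class="some-heading">HELLO WORKSHOP</h1>
>>>>>>> main
```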
Expected results for the exercises above, in order:

Cohort locations:

 name
------
 14
 15
 16
 17

Student locations:

 username
----------------
 virtualdominic
 charlielafosse
 starsuit
 bobbysebolao
 albadylic
 reubengt

Students with cohort locations:

 username       | location
----------------+---------------
 eliascodes     | Bethnal Green
 oliverjam      | Bethnal Green
 yvonne-liu     | Bethnal Green
 matthewdking   | Nazareth
 helenzhou6     | Bethnal Green
 virtualdominic | Finsbury Park
 charlielafosse | Finsbury Park
 starsuit       | Finsbury Park
 bobbysebolao   | Finsbury Park
 albadylic      | Finsbury Park
 reubengt       | Finsbury Park

Students with projects:

 name          | username
---------------+--------------
 FACX Machine  | oliverjam
 FACX Machine  | yvonne-liu
 Hamster Hotel | oliverjam
 Hamster Hotel | starsuit
 Agony Yaunt   | starsuit
 Agony Yaunt   | bobbysebolao

Students with projects by location (bonus):

 name          | username
---------------+--------------
 Hamster Hotel | starsuit
 Agony Yaunt   | starsuit
 Agony Yaunt   | bobbysebolao
Learn how to use a Postgres database on a Node server
This workshop covers how to connect your Node server to a Postgres database using the node-postgres
library.
Before you begin make sure you have installed Postgres.
Download the starter files
cd
into the directory
Run npm install
The starter files include some dependencies and database setup. As you work through the workshop you should read the corresponding files and try to understand what the code does. Each file includes explanatory comments to help.
In order to run our app locally we'll need a Postgres database running on our machine to connect to.
You can create a new Postgres user and a new Postgres database owned by that user with these two commands:
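One way to do this is with Postgres' command-line helpers. The user name `myuser` is a placeholder — choose your own (the `learn_pg` database name matches the workshop):

```shell
# "myuser" is a placeholder — pick your own user name
createuser myuser
createdb learn_pg --owner myuser
```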
If this succeeds you shouldn't see any output in your terminal. You can check it worked properly by listing your databases with this command:
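Postgres ships a flag for listing all databases on the local server:

```shell
psql --list
```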
You should see the new learn_pg
database in the list, like this:
You can list all the tables in your new database with this command:
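The `\dt` meta-command lists tables; you can run it non-interactively like this:

```shell
psql --dbname learn_pg --command "\dt"
```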
You should see "Did not find any relations." This is because our database is empty—we haven't created any tables or inserted any data.
You can populate your database by running SQL commands to create tables and insert data. Doing this manually would be slow and repetitive, so instead you can run them from a file in the repo:
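`psql` can execute every command in a file against a named database:

```shell
psql --dbname learn_pg --file database/init.sql
```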
The /database/init.sql
file contains SQL commands to create the tables we want, then insert some example data. You can re-run this command to wipe your DB and start from scratch whenever you need to.
This is dangerous. If you run this on your production database you'll delete all your data.
We can query our DB manually from the terminal using psql
, but that doesn't help us build an app. We need a way for our Node server to connect to the DB. To do this we use the node-postgres
library. You need to install this into the project with:
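The node-postgres library is published on npm as `pg`:

```shell
npm install pg
```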
Our app needs to know the database's address. Postgres runs a local server so you can talk to the DB. The full URL (also known as the "connection string") for your database will be:
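The exact string depends on the user and password you created earlier — `myuser` and `mypassword` below are placeholders. `5432` is Postgres' default port and `learn_pg` is the database from this workshop:

```
postgres://myuser:mypassword@localhost:5432/learn_pg
```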
You could hard-code this URL into our app code, but this address is only correct for the database running on your computer. When someone else clones your repo they'll have their own DB set up.
It's best to read configuration like this from an "environment variable" (env var). This is like a JS variable, but set in your terminal before a program runs. For example you can start your application with PORT=3000 node server.js
. Your server can then read this value to know which port your app should listen on.
Take a look at the /database/connection.js
file. It imports the pg
library, then connects to the right DB by passing in the connection string. It reads this from the DATABASE_URL
env var, which means we must make sure this is set before starting our server.
Rather than type DATABASE_URL=postgres://... npm run dev
every time, we can rely on the popular dotenv
library. This allows us to define env vars in a file named .env
. We gitignore this file so each person who clones the repo can make their own with their own personal DB URL.
First install dotenv
as a dev dependency:
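Install it with npm's dev-dependency flag:

```shell
npm install --save-dev dotenv
```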
Then create a .env
file at the root of your project containing:
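The variable name `DATABASE_URL` is what `connection.js` reads; the user and password are placeholders for your own:

```
DATABASE_URL=postgres://myuser:mypassword@localhost:5432/learn_pg
```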
Then change your dev
npm script in the package.json
file to this:
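A sketch of what the script might look like — the `nodemon` runner and `server.js` entry point are assumptions about the starter files; the important part is the `-r dotenv/config` flag:

```json
{
  "scripts": {
    "dev": "nodemon -r dotenv/config server.js"
  }
}
```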
The -r dotenv/config
bit tells the dotenv
library to read the .env
file and pass all the values inside it to your application. You can then access them in your JS code with process.env.VAR_NAME
.
Now our server knows how to talk to our database we can start using it in our route handlers. First let's make our home route list all the users in the database.
Open workshop/routes/home.js
. To access our DB we need to import the pool object we created in connection.js
:
This db
object has a .query
method that sends SQL commands to the database. It takes a SQL string as the first argument and returns a promise. This promise resolves when the query result is ready.
Let's get all the users in the DB:
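A sketch of the handler. In the workshop `db` is imported from `database/connection.js`; here a tiny stub with the same promise-based `.query` method stands in so the example is self-contained (the row shape is hypothetical):

```javascript
// Stub standing in for the pg Pool exported by connection.js
const db = {
  query: (sql) => Promise.resolve({ rows: [{ username: "oliverjam" }] }),
};

function home(request, response) {
  // .query returns a promise that resolves with the full result object
  db.query("SELECT * FROM users").then((result) => {
    console.log(result); // the rows property is the useful part
  });
}
```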
Refresh the home page and you should see a big object logged in your terminal. This is the entire result of our database query.
The bit we're actually interested in is the rows
property. This is an array of each matching entry in the table we're selecting from. Each row is represented as an object, with a key/value property for each column.
You should see an array of user objects, where each object looks like this:
Since DB queries return promises we need to make sure we send our response inside the .then
callback. Let's send back a list of all users' first names:
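A sketch of the idea, with stand-ins: `db` is a stub with pg's promise-based `.query`, `response.send` mimics an Express-style response, and the `username` column name is an assumption about the workshop's table:

```javascript
const db = {
  query: () => Promise.resolve({ rows: [{ username: "oliverjam" }] }),
};

// Pure helper: turn query rows into an unordered list of names
function listUsers(rows) {
  const items = rows.map((user) => `<li>${user.username}</li>`).join("");
  return `<ul>${items}</ul>`;
}

function home(request, response) {
  db.query("SELECT username FROM users").then((result) => {
    // Send the response inside .then, once the rows are available
    response.send(listUsers(result.rows));
  });
}
```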
Refresh the page and you should see an unordered list containing each user's first name.
Challenge
We're currently querying for too much data: we only need the username
, but we're getting every column. For very big data sets this could be a performance problem.
Amend your query so it only returns the column we need.
Navigate to http://localhost:3000/create-user. You should see a form with fields for each column in our user database table. It submits a POST
request to the same path. The post
handler logs whatever data was submitted. Try it now to see it working.
We want to use the INSERT INTO
SQL command to create a new user based on the user-submitted information.
Safely handling user input
Including user-submitted information in a SQL query is dangerous. A malicious user could enter SQL syntax into an input. If we just inserted this straight into a query this would mean they could execute dangerous commands in our DB. This is one of the most common causes of major hacks, so it's important to prevent it.
You should never directly insert user input into a SQL string:
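Here's what goes wrong with naive string interpolation (the table and column names are hypothetical):

```javascript
// DANGER: interpolating user input directly into SQL — do not do this
const username = "'; DROP TABLE users; --";
const unsafeQuery = `SELECT * FROM users WHERE username = '${username}'`;
console.log(unsafeQuery); // the query string now contains a DROP TABLE command
```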
If the user typed ; DROP TABLE users;
into the username
input we'd end up running that command and deleting all our user data!
The pg
library uses something called "parameterized queries" to safely include user data in a SQL query. This allows it to protect us from injection. We can leave placeholders in our SQL string and pass the user input separately so pg
can make sure it doesn't contain any dangerous stuff.
We use $1
, $2
etc as placeholders, then pass our values in an array as the second argument to query
. pg
will insert each value from the array into its corresponding placeholder. The values are sent to the database separately from the SQL command, so they can never be executed as SQL.
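A sketch of the shape of a parameterized call. With node-postgres this pair would be passed as `db.query(text, values)`; here a stub `db` stands in so the example runs standalone (table and column names are hypothetical):

```javascript
const text = "INSERT INTO users (username) VALUES ($1)";
const values = ["; DROP TABLE users;"]; // even hostile input is safe here

const db = {
  // pg sends params separately from the SQL text, so they are never parsed as SQL
  query: (sql, params) => Promise.resolve({ rows: [], sql, params }),
};

db.query(text, values).then((result) => console.log(result.params));
```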
Challenge
Edit the post
handler function in create-user.js
to save the submitted user data in the database. Make sure to use a parameterized query.
So far we've only touched the users
table. Let's make the posts
visible too.
Add a new route GET /posts
This should display a list of all the posts in your database
Once that's working amend the handler to also show the username of each post's author.
Hint: You'll need to use a join to get data from both tables in one query.
The best place to start is a Chrome Lighthouse accessibility audit. This will tell us the obvious things to fix. There are 5 problems to fix.
Background and foreground colors do not have a sufficient contrast ratio.
We should make the body color darker until we have at least a 4.5:1 contrast ratio. Otherwise it's difficult to read the text.
Document doesn't have a <title>
element
We should add a <title>
containing a unique title for this page.
Image elements do not have [alt]
attributes
The <h1>
contains an image of some text, and it doesn't have any alternative text. This means the title of the page is effectively hidden from screen readers.
There are two possible solutions: either add an alt attribute containing the title text, or replace the image with actual text, using a web font to make it look correct.
Cool trick: you can add &text=mytitle
to a Google Font URL and it'll only load the characters required for that text. This is useful if you only need a custom font for a small piece of text like a heading.
Heading elements are not in a sequentially-descending order
The headings on the page jump from h1
to h3
, then h4
. We shouldn't skip the h2
level just because we prefer smaller titles for the drinks. We should use the right sequential heading levels and style the h2
s to be smaller.
<html>
element does not have a [lang]
attribute
We should add a lang="en"
attribute to the page so browsers and screen readers know the content is in English.
There are more accessibility problems that Lighthouse cannot catch automatically.
The nav menu toggle is not usable with the keyboard
We should use a <button>
instead of a <div>
, as they are focusable and usable with the keyboard by default. They're also announced as interactive by screen readers. It's possible to make this work with a div but you'd need to add a lot of custom JS.
The nav menu toggle has no label
The SVG icons might be obvious to sighted users, but a screen reader user has no idea what the button does. There are lots of ways to add labels. We could put some visually-hidden text inside the button, or add an aria-label
to it.
In this case the simplest solution is to label the SVGs themselves, since we're already toggling them when the button is clicked. We can label an SVG by adding a <title>
element inside.
The alternative text for the drink images is empty.
Empty alt text is fine for purely decorative images, but you could argue that these images add important information. They show the user how the drink should look, what glass it is served in, what garnish it should have etc. We should add alt text describing these things.
Links have no interactive styles.
There's no indication that the <a>
tags are any different to the rest of the text. Links are styled blue and underlined by default—you need some way to indicate to users they can click them.
The CSS also disables the focus outline for links. This makes it impossible for a keyboard user to know when they have focused a link. You may think the default focus styles are ugly, but it's important to replace them with your own styles rather than disable them entirely.
"Read more" links are repetitive
There's no way for a screen reader user to know where each "Read more" link will take them (without listening to the entire href
to figure it out). Link text should describe where the link will take the user—avoid generic "click here" or "read more" where possible.
In this case it might make more sense for the link to just be the title, which contains the drink's name.
No way to scroll the carousel with the keyboard
The horizontally scrolling drinks carousel is not controllable via the keyboard. Usually users can scroll the page with the arrow keys, but that doesn't work for scrollable containers.
We can fix this by making the carousel focusable with tabindex="0"
. Now users can tab to the carousel and use their arrow keys to scroll it.
Learn how to use integration tests to make sure the different parts of your application work together correctly.
Integration tests check that whole features of your code work correctly. This usually involves checking several units of code at once.
Usually some of your code is devoted to "application logic". This is where you coordinate several other bits of code, possibly with branching logic depending on some conditions. Imagine we were building a calculator:
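A minimal sketch of what such a calculator might look like — the function names are illustrative, not the workshop's exact code:

```javascript
// Small "units" of maths logic
function add(a, b) { return a + b; }
function subtract(a, b) { return a - b; }
function multiply(a, b) { return a * b; }
function divide(a, b) { return a / b; }

// Application logic: branch on the operator and delegate to the units
function calculate(operator, a, b) {
  switch (operator) {
    case "+": return add(a, b);
    case "-": return subtract(a, b);
    case "*": return multiply(a, b);
    case "/": return divide(a, b);
    default: throw new Error(`Unknown operator ${operator}`);
  }
}
```

Testing `calculate` exercises both the branching and the underlying maths functions in one go.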
We could individually unit test all the small maths functions, but that would be a lot of work. Instead we can write tests for the calculate
function. If we verify that gives the right answer then we know the smaller functions must work. We're also testing how our application integrates those small units together. If we only unit tested our maths functions we could miss that our app was still totally broken (e.g. if there was a mistake in our switch
).
Open workshop/index.js
in your editor and read the calculate
function
Open workshop/index.test.js
and write tests for the calculate
function.
The equal
, notEqual
& test
functions from Learn Testing are included on the page.
You should have one test for each branch of the switch statement.
Open workshop/index.html
and check the console to see your test results
Don't worry about the UI on the page for now
What happens if we provide non-numerical input?
Write a test that calls calculate
with strings instead of numbers.
Change calculate
so that it can handle numbers passed as strings
hint: have a look at parseFloat
Integration tests can also check where our code integrates with things outside our control. For example web apps often have to update the DOM to show results to the user. We didn't write the DOM code (that's part of the browser), but we still need to make sure our code integrates with it correctly.
We can write our tests to simulate a real user browsing the site. For example here is a form that takes user input and makes it uppercase:
Imagine we wanted to check that our code worked correctly. We would open the page in our browser, then follow these steps:
Find the input we want
Change the input's value to "test"
Click the submit button
Check the result on the page is "TEST"
We can write an automated test using JS that does exactly the same thing.
Open workshop/index.html
in your editor. You should see a basic calculator form.
Add a test to workshop/index.test.js
that checks the form works correctly, just like the example above.
Learn how to test that small individual parts of your code work in isolation.
Unit tests make sure that the smallest building blocks of an app are working correctly. Generally this means testing individual functions that do one thing. Unit testing is relatively simple to get started with, since you don't have to worry about how different parts of your code interact. You just call a single function and check the result was what you expected.
Testing is easier if you structure your code a certain way. This means making sure you have single-responsibility functions: e.g. calculateTotal
and updatePage
rather than a single updatePageWithTotal
that does both at once.
It's also much easier to test "pure" functions. This means functions that always return the same output when given the same input (without changing anything else).
For example this is not pure:
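A sketch of an impure function (the names are illustrative):

```javascript
// Impure: the function reassigns a variable defined outside itself
let url = "https://example.com";

function addQueryParams(params) {
  url = url + "?" + params; // mutates outer state
}
```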
because it changes the url
variable outside the function. It's not safe to call this multiple times:
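Calling the impure version twice mangles the shared variable (same illustrative names as above, repeated here so the snippet runs standalone):

```javascript
let url = "https://example.com";

function addQueryParams(params) {
  url = url + "?" + params;
}

addQueryParams("page=2");
addQueryParams("sort=asc");
console.log(url); // "https://example.com?page=2?sort=asc" — broken
```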
This makes it tough to test since each time we run the function things are different.
This version is pure:
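A pure sketch: same input, same output, nothing outside the function is touched (`makeUrl` matches the name the workshop files use):

```javascript
function makeUrl(base, params) {
  return base + "?" + params;
}
```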
which makes it much easier to test, since the results are predictable.
Open workshop/index.js
in your editor and read the makeUrl
definition
The equal
, notEqual
and test
functions from the Learn Testing workshop are included on the page for you to use.
Open workshop/index.test.js
and write a test for makeUrl
Open workshop/index.html
and check your test passes
Sometimes we'll need to test functions that return objects or arrays. This can be awkward as objects that look the same are not equal to each other. For example:
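Two objects with identical properties still compare as unequal:

```javascript
const x = { name: "oliver" };
const y = { name: "oliver" };
console.log(x === y); // false — they are different objects in memory
```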
Although x
and y
have the exact same properties here, they are totally different objects. This means we cannot use the normal equality operators to check them.
We can work around this by testing if specific properties of objects are the same. For example:
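Comparing the properties you care about, rather than the whole objects:

```javascript
const x = { name: "oliver" };
const y = { name: "oliver" };
console.log(x.name === y.name); // true — primitive values compare by value
```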
We can do the same for array elements (e.g. checking that the first thing in both arrays is the same).
Bear in mind this doesn't guarantee that all the properties are the same, just the ones you check.
Open workshop/index.js
in your editor
Write a searchParamsToObject
function
It should take a form-encoded string like "name=oliver&email=hello@oliverjam.es"
It should return an object like { name: "oliver", email: "hello@oliverjam.es" }
Write a test for this function in workshop/index.test.js
Unit tests are great for checking edge-cases. Since a unit is usually small and self-contained we can check it with all kinds of different input to make sure we get what we expect.
This is where manual testing gets very tedious: manually entering 0
, then -1
, then ""
, then "hello"
, then 99999999999999999
into an input to see what happens would take forever.
A leap year has an extra day (February 29th) to account for a solar year being about 365.24 days long (not exactly 365). Leap years usually occur every 4 years, but in order to stay consistent there are extra rules: years divisible by 100 are not leap years, unless they are also divisible by 400.
For example 2020
and 2000
were leap years, but 1900
was not.
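One possible sketch of those rules (error messages and validation choices are up to you — try writing your own before reading this):

```javascript
function isLeapYear(year) {
  if (typeof year !== "number" || Number.isNaN(year)) {
    return "Please pass a number";
  }
  if (year < 0) {
    return "Please pass a positive year";
  }
  if (year % 400 === 0) return true; // e.g. 2000
  if (year % 100 === 0) return false; // e.g. 1900
  return year % 4 === 0; // e.g. 2020 true, 2021 false
}
```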
Write an isLeapYear
function in workshop/index.js
It should take a year and either return an error message or a boolean
Write several tests to check your function works correctly
Make sure you account for and test all possible edge-cases
What happens if you pass a string?
What happens if you pass a negative year?
Learn how to fetch data from APIs in your React components
React doesn't have a built-in pattern designed for fetching data. This can make it a little confusing at first, so let's look at how we can combine useState
and useEffect
to create a GitHub profile page.
Download starter files and cd
in
npm install
npm run dev
The index.html
file loads workshop/index.jsx
. This imports workshop/App.jsx
and renders the component using React.
React components are designed to keep the DOM in-sync with your app's data. For example this component will re-render every time the name
prop changes, ensuring the message is always correct:
However some parts of your app cannot be represented with JSX, as they are not part of the DOM. React calls these "effects"—they are extra things your component does (other than the primary task of rendering DOM elements).
In order to ensure React can keep track of these effects and re-run them when our app's data changes we pass them in to the React.useEffect
function.
Fetching data is one of these "effects". We run our fetch
request inside useEffect
so React can control when it runs (or re-runs).
We have a problem here: our API request could take 10 seconds to finish. However React components are synchronous—they must render something straight away. We cannot wait for the response to be done before returning a value.
Instead we need to update our component with the new data once the response finishes. We can make a component update by setting state. Remember that a component will re-run whenever its state values change.
We can't use this data immediately, since the API request is asynchronous. Our component will render at least once with the initial state, which here is null
.
The easiest way to make sure the data has loaded before we use it is to check whether the state variable is there:
Here's the flow of our component's updates:
The component is rendered (i.e. <Pokemon />
somewhere)
React calls the Pokemon
function
React creates the pokeData
state (because we called useState
)
React queues an effect to run (because we called useEffect
)
pokeData
is null
so JS runs the first if
branch
The component returns <div>Loading...</div>
The queued effect runs, which sends a fetch
request
Some time passes...
The fetch
request resolves with the response data
Our .then
sets the pokeData
state as the response object
React sees the state update and re-runs the component function
This time the pokeData
state variable is the response object (not null
)
So JS runs the second if
branch and returns <div>pikachu</div>
There is one final problem to solve: our component currently always queues a new effect. This means that after our component's state updates (and re-renders the component) it'll send a new fetch
request. When this request resolves it'll update the state, re-rendering the component. This will trigger another fetch
, and so on.
To avoid this infinite loop we need to constrain when the effect runs, by providing the dependencies array as the second argument. This tells React that the effect only needs to re-run if the things inside the array have changed.
In this case our effect has no dependencies, since it doesn't use any values from outside the effect. So we can specify an empty array:
This tells React "you won't need to re-run this effect, since it doesn't depend on any values that might change and get out of sync".
You're going to build a Profile
component that fetches a user from the GitHub API and renders their name, avatar image and any other details you like.
Create a new component in workshop/Profile.jsx
It should fetch your profile from "https://api.github.com/users/{username}"
It should render a loading message until the request is done
It should render at least your name & avatar image once the request completes
Our Profile
component would be more useful and reusable if it could fetch any user's GitHub information. Components can be customised by passing in props (just like function arguments). We want to be able to do this:
and have the component fetch that user's profile.
Amend Profile
to take a name
prop
Use this prop to fetch the right data from GitHub
Pass a name
to <Profile />
inside App
Our Profile
component can now fetch any user, but we still have to hard-code the prop when we render it in App
. Ideally we'd let users type the name into a search input, then update the prop we pass down when they submit.
We can achieve this with a state value in App
that keeps track of the current value of name
. When the form is submitted you can update that state value, which will cause the App
component to re-render. This will then cause Profile
to re-render with the new value of name
passed as a prop.
Add a form with a search input to App
Add a name
state value to App
When the form is submitted update the state value
Pass the state value to Profile
so it knows which name to fetch
The user response object from GitHub contains a repos_url
property. This is a URL from which you can fetch an array of the user's repositories. To display the user's repos underneath their other info we'll have to make another fetch after the first one resolves.
The simplest way to achieve this is by creating a new component that takes the repos_url
as a prop, fetches the data, then renders the list of repos.
Create a new component in ReposList.jsx
It should receive a URL as a prop and fetch the repos from it
When it receives a response it should render a list of repos
Amend Profile
to render ReposList
and pass in the right URL prop
Learn about testing by building your own tiny testing library.
The concept of testing code is often introduced with complex libraries. This hides the core of testing: writing some code that runs your other code and tells you if it's wrong. This workshop introduces the concept by slowly building up a useful function that helps you test your code.
Download the starter files
Open starter-files/workshop.html
in your editor
There should be a function that squares a number (multiplies it by itself). It's used like this:
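Something along these lines (the starter file's exact code may differ slightly):

```javascript
function square(n) {
  return n * n;
}

const result = square(2);
console.log(result); // 4
```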
Right now we only have one option for checking this works. We have to call the function, log the result, then check that result (with a calculator for bigger numbers).
If you want to check it works for lots of different numbers you'll be doing a bunch of manual work. You'll have to repeat this work every time the code changes (to make sure you didn't break it). It would be helpful to automate this process.
Since you know how to code you can begin to automate this! Write some JavaScript that calls the square
function (like above), then checks that the result is what you expect. It should log a useful message to the console using console.error("my message")
if the result is wrong.
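A first manual check might look like this (`square` is repeated so the snippet runs standalone):

```javascript
function square(n) {
  return n * n;
}

const result = square(3);
const expected = 9;
if (result !== expected) {
  console.error(`Fail: square(3) should be ${expected} but returned ${result}`);
}
```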
If your test passes change your expected value so that it's definitely wrong. Can you see the failure in your browser console?
This is better than manually checking, but not much. We have to write all the same logic for checking whether the values are the same and logging every time. It would be a lot of copy/pasting to write 20 tests.
equal
Most tests check whether two things are equal. It's helpful if we extract that logic into a reusable function.
Write a function called equal
that takes two arguments and checks if they're the same. If they are it should log the success with console.info
. If they aren't it should log the failure with console.error
.
Use this equal
function to refactor your test above, then write another one to check that square(3.5)
is correct.
If your test is passing change your expected value so that it's definitely wrong. Can you see the error in your browser console?
notEqual
It's sometimes useful to be able to check whether two things are not equal.
Write a notEqual
function. It should be similar to equal
, but log a failure when its two arguments are the same.
Write a test that checks square(3)
does not return 10.
Right now our tests are all jumbled together. This means they share the same scope, so we can't reuse variable names. It's also hard to distinguish them in the console. It would be nice if each test had a descriptive name.
We could divide our tests up using functions, like this:
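Usage looks like this. Minimal versions of the helpers are sketched inline so the example runs standalone — writing `test` yourself is the next exercise, so treat these as one possible shape, not the answer:

```javascript
function square(n) {
  return n * n;
}
function equal(actual, expected) {
  if (actual === expected) {
    console.info(`Pass: ${actual} equals ${expected}`);
  } else {
    console.error(`Fail: expected ${expected} but received ${actual}`);
  }
}
function test(name, testFunction) {
  console.group(name); // label the output with the test's name
  testFunction(); // run the actual test code
  console.groupEnd();
}

test("square() multiplies a number by itself", () => {
  equal(square(2), 4);
});
```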
We call a test
function with a descriptive name, and pass a callback function containing our test code.
Write a function called test
that takes two arguments: a name
and a testFunction
. It should use console.group
to log a group labelled with the name
. You'll need console.groupEnd
to close the group at the end.
It should call the testFunction
callback argument so that the actual test is run.
For more complex assertions it's nice to be able to write a custom message specific to that test.
Amend your equal
and notEqual
functions so that they take a third optional message
argument. Your console.info
/console.error
should log this message. If there is no message
passed in then default to the message you were using before.
Congratulations, you've built a testing library from scratch!
We are missing some stuff (support for testing async code, a summary of total passing/failing tests), but we can get pretty far with just this.
Learn how promises make asynchronous JS easier to manage, then make HTTP requests using the fetch method
We're going to learn how to make HTTP requests in JavaScript. This is made possible by the fetch
function, which uses something called "promises" to manage async code. Here's a really quick example of what it looks like before we dive in:
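Here's the shape of it. The request goes to the real PokéAPI, so the `.catch` guards against running without a network connection:

```javascript
fetch("https://pokeapi.co/api/v2/pokemon/pikachu")
  .then((response) => response.json())
  .then((data) => console.log(data.name))
  .catch((error) => console.error(error));
```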
Before we look at promises, lets make sure we understand what problem they solve.
JavaScript is a single-threaded language. This means things generally happen one at a time, in the order you wrote the code.
When something needs to happen out of this order, we call it asynchronous. JavaScript handles this using a "queue". Anything asynchronous gets pushed out of the main running order and into the queue. Once JS finishes what it was doing it moves on to the first thing in the queue.
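For example (instrumented with an `order` array so you can inspect when each log happens — the plain version would just call `console.log` directly):

```javascript
const order = [];
const log = (value) => {
  order.push(value);
  console.log(value);
};

log(1);
setTimeout(() => log(2), 1000); // queued: runs after everything else
log(3);
// order is [1, 3] at this point; 2 arrives roughly a second later
```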
This code logs 1
, then 3
, then (after 1 second) logs 2
.
It's intuitive that the above example logs 2
last, because JS has to wait a whole second before running the function passed to setTimeout
.
What's less intuitive is that this is the same even with a timeout of 0ms.
This code logs 1
, then 3
, then (as soon as possible) logs 2
.
This is because setTimeout
always gets pushed to the back of the queue—the specified wait time just tells JS the minimum time that has to pass before that code is allowed to run.
We can use callbacks (functions passed as arguments to other functions) to access async values or run our code once some async task completes. In fact the first argument to setTimeout
above is a callback. We pass a function which setTimeout
runs once the timeout has finished.
Callbacks can be fiddly to deal with, and you can end up with very nested function calls if you have to chain lots of async stuff. Here's a contrived example:
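The helpers below are hypothetical stand-ins (stubbed with `setTimeout` rather than real network calls) to show the nesting:

```javascript
function getUser(name, callback) {
  setTimeout(() => callback({ name }), 10);
}
function getRepos(user, callback) {
  setTimeout(() => callback(["repo-1", "repo-2"]), 10);
}
function getCommits(repo, callback) {
  setTimeout(() => callback(["initial commit"]), 10);
}

// Each async step nests one level deeper
getUser("oliverjam", (user) => {
  getRepos(user, (repos) => {
    getCommits(repos[0], (commits) => {
      console.log(commits); // three levels deep already
    });
  });
});
```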
This is often referred to as "callback hell". In more realistic code with error handling etc it can be pretty hard to follow.
Here's how that would look if each function returned a promise instead:
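The same flow, flattened. The helpers are again hypothetical, stubbed with `Promise.resolve` so the sketch is self-contained:

```javascript
function getUser(name) {
  return Promise.resolve({ name });
}
function getRepos(user) {
  return Promise.resolve(["repo-1", "repo-2"]);
}
function getCommits(repo) {
  return Promise.resolve(["initial commit"]);
}

getUser("oliverjam")
  .then((user) => getRepos(user))
  .then((repos) => getCommits(repos[0]))
  .then((commits) => console.log(commits));
```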
Our code stays "flat" at the same level no matter how many async things happen.
Promises are a special type of object. They allow us to represent the eventual result of async code. A function that executes asynchronously can return a promise object instead of the final value (which it doesn't have yet).
For example when we fetch some data from a server we will receive a promise that will eventually represent the server's response (when the network request completes).
fetch
We can use the global fetch
function to make HTTP requests in the browser. It takes two arguments: the URL you want to send the request to and an options object (we'll look at that later). It returns a promise object that will eventually contain the response.
Open starter-files/workshop.html
in your editor. Add your code inside the script tag.
Use fetch
to make a request to "https://pokeapi.co/api/v2/pokemon/pikachu"
.
Assign the return value to a variable and log it.
Open the file in your browser. You should see the pending promise in the console.
Promises can be in 3 states:
pending (async code has not finished yet)
fulfilled (expected value is available)
rejected (expected value is not available).
There's a bit more complexity to this, so it's worth reading this explanation of promise states later.
So how do we actually access the value when the promise fulfills?
Since the promise's fulfilled value isn't accessible synchronously, we can't use it immediately like a normal JS variable. We need a way to tell JS to run our code once the promise has fulfilled.
Promises are objects with a .then()
method. This method takes a callback function as an argument. The promise will call this function with the fulfilled value when it's ready.
It's worth noting that you don't need to keep the promise itself around as a variable.
Use .then()
to access the result of your PokéAPI request. Log this to see what a JS response object looks like.
The promise resolves with an object representing the HTTP response (e.g. it has a status
property). However since the response body could be in many different formats there's an extra step to access it. Response objects have built-in methods for parsing different body formats.
Since the PokéAPI returns JSON-formatted data we can use the response.json()
method to access it. Accessing the body can be slow, so this is async too. The .json()
method also returns a promise, so we need to use another .then()
to access the value.
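A nested version might look like this (the `.catch` is there so the sketch fails gracefully offline):

```javascript
fetch("https://pokeapi.co/api/v2/pokemon/pikachu")
  .then((response) => {
    // a second .then nested inside the first
    response.json().then((data) => {
      console.log(data);
    });
  })
  .catch((error) => console.error(error));
```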
Nesting our .then()
s like this is getting us back into the same mess as with callbacks. Luckily promises have a nice solution to this problem.
Chaining .thens
The .then()
method always returns a promise, which will resolve to whatever value you return from your callback. This allows you to chain your .then()
s and avoid nested callback hell.
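The chained version stays flat (again with a `.catch` so it fails gracefully offline):

```javascript
fetch("https://pokeapi.co/api/v2/pokemon/pikachu")
  .then((response) => {
    // returning this promise makes the next .then wait for it
    return response.json();
  })
  .then((data) => {
    console.log(data);
  })
  .catch((error) => console.error(error));
```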
The .then
s will run in order, and wait for the previous one to fulfill before starting. Here our first .then
returns the promise that the response.json()
method creates. Our second .then
only runs once that promise fulfills with the JSON data.
Use response.json()
to get the response body
Add another .then()
to log the body. You should see a Pokémon object
Sometimes requests go wrong. Promises have a built in way to control what happens when the asynchronous code hits an error. We can pass a function to the promise's .catch()
method. This will be run instead of the .then()
if the promise rejects. Your callback will be passed the error that occurred instead of the data you wanted.
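A sketch with a promise that rejects immediately, so the `.catch` callback runs instead of `.then`:

```javascript
const failing = Promise.reject(new Error("something went wrong"));

failing
  .then((data) => console.log(data)) // skipped — the promise rejected
  .catch((error) => console.error(error.message));
```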
Remove the URL from your fetch call. You should see the browser warn you about an "uncaught error"
Add a .catch()
to your code that logs the error instead
Note: you would usually want to do something useful with the error instead of just logging it.
We're going to use the fetch
function to get a user from the GitHub API. The API is free to access, but you might get rate-limited if you make too many requests. If you hit this problem you can fix it by generating an access token and including it in the request URL.
Write a getUser
function that takes a username argument
It should fetch that user's profile from "https://api.github.com/users/USERNAME_HERE"
It should be callable like this:
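One possible shape (the username here is just an example; calling it makes a real network request):

```javascript
function getUser(username) {
  return fetch(`https://api.github.com/users/${username}`).then((response) =>
    response.json()
  );
}

// usage (makes a real network request):
// getUser("octocat").then((user) => console.log(user));
```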
Write a getRepos
function that takes the GitHub user response object as an argument.
Fetch a user using getUser
, then use getRepos
to fetch their repos using the repos_url
property from the user object.
Log the array of repos.
Fetch multiple GitHub profiles simultaneously using your getUser
function above (you'll have to call it more than once)
You might want to read the docs for Promise.all
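Promise.all takes an array of promises and resolves with an array of their results, in the same order as the input. A small sketch of its behaviour (the wait helper is invented for the example):

```javascript
// a helper that resolves with a value after a delay
const wait = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

Promise.all([wait(100, "first"), wait(50, "second")]).then((results) => {
  console.log(results); // ["first", "second"], matching the input order
});
```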
It's important to use the right HTML tags to represent elements on the page. Let's practice using lots of different semantic HTML.
HTML is a markup language. It is used to "mark up" a document with extra information about what each thing is. This is useful because it lets us communicate the "semantics" of the document.
When we describe markup as "semantic" we mean it describes the structure of the document. It tells the browser what things are (rather than what content they contain or how they are styled).
For example we can make a <span>
look like a button using CSS:
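For example, something like this (the class name and styles are just illustrative):

```html
<style>
  .fake-button {
    display: inline-block;
    padding: 8px 16px;
    border-radius: 4px;
    background-color: dodgerblue;
    color: white;
  }
</style>

<span class="fake-button">Click me</span>
```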
We could even add some JavaScript event handlers to it so it behaved like a button when clicked. However from the browser's perspective it has no meaning—it's not actually a button.
You may wonder why it's important for the browser to know what an element actually is, if it looks right and behaves the same. There are a few reasons why using semantic HTML is important.
Browsers have a bunch of built-in styles and behaviours you should take advantage of. Why re-create all of that from scratch when you could just override the bits you want to change?
This also means your page will still look okay if your CSS is broken or fails to load.
You probably won't remember to reproduce everything the browser does for you. For example buttons not only respond when clicked, but also when the "Enter" or "Space" keys are pressed. Built-in elements have lots of complex behaviours that are hard (or impossible) to reproduce. Think how complicated a <select>
would be to build yourself!
Third (and most importantly) semantic HTML is machine readable. By describing the structure of our page we allow computer programs to understand it as well as humans.
For example most web browsers now have a "reader" mode. You can click an icon to get a simplified view of an article (with all the ads, cookie banners etc removed). These rely on the article using elements like <header>
and <article>
to know what bits to keep. They can also apply custom styles to make it easier to read. If everything was in a div with custom CSS this wouldn't work.
This is also very important for accessibility. The web is for everyone, including people who use other types of software to browse. For example visually-impaired users often use "screen readers", which read the page out loud.
Human brains are great at "pattern-matching"—e.g. if you see a rectangle with a blue background and rounded corners you assume it's something you can click on to trigger an action (i.e. a button). Your brain doesn't know or care if the underlying element was a div.
Computer programs like screen readers are not so good at this—they can't guess at behaviour based on how something is rendered. So instead they must use the underlying markup semantics to figure out what things are.
For example using heading tags (<h1>
, <h2>
etc) creates a page structure that lets a user quickly jump from section to section to find what they need (without waiting for the entire page to be read out loud).
This allows a visually-impaired user to get a quick overview of the structure of the page in the same way as a sighted user does by scanning the headings.
There are almost 100 HTML elements nowadays, but you don't need to remember them all. The important thing is to remember that there might be a more specific tag than div or span for what you're making.
This especially applies to top-level "blocks" of the page. Fun fact: when HTML5 was in progress the spec authors looked at thousands of existing websites to see what the most popular IDs on top-level elements were. They saw a ton of <div id="header">
, <div id="main">
and <div id="footer">
, which is why HTML5 added the <header>
, <main>
and <footer>
tags.
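A typical page skeleton using those elements might look something like:

```html
<body>
  <header>Site logo and navigation</header>
  <main>The unique content of this page</main>
  <footer>Copyright and contact links</footer>
</body>
```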
Here's a quick list to run through whenever you're picking an element:
Is it a meaningful area of the page?
Use an HTML5 block element like <header>
or <footer>
.
Does it label the start of a new section?
Use a heading (with the right level) like <h2>
or <h3>
Does it navigate to a new page?
Use an <a href="/page">
Does it trigger JS behaviour?
Use a <button>
Does it allow user input?
Use a <form>
containing <input>
s (with <label>
s!)
Is it just for applying some layout/styles?
Use something like <div class="grid">
or <span class="big-text">
You're going to re-write an HTML page to use semantic HTML. Try to replace as many generic elements with more descriptive semantic ones as you can. When you're done it should contain no <div>
s at all.
First you need to download the starter files. There's a box at the top of this page labelled "Download files via CLI". Click the copy button to copy the command, then run it in your Terminal. This will automatically download the files you need for this workshop.
Once you've downloaded the files you can move into that directory in your Terminal by running:
You can see what's in there by listing the files with:
You can open the challenge file in VS Code with:
And you can open the page in your default browser with:
You are of course welcome to navigate the files however you're comfortable, but it's a good idea to get some practice working in your Terminal.
Don't peek at the solution before you try the challenge!
There are a few missing things that aren't necessarily related to semantics, but are important for the page to have anyway. Try to find and fix some of these issues.
Learn how to handle different kinds of errors on your Node server
Errors (or "exceptions") stop JavaScript from running. They usually mean something has gone so wrong that the program doesn't know how to continue. This means any code after the line where the error occurred won't run. This is pretty bad in the browser as it can totally break the application for a single user, but on the server it can be much worse. A single running Node server might be responding to hundreds or thousands of requests from different users. If an error stops the code executing it stops for all of the users.
"Error" can refer to both the "exception" (line of code going wrong) as well as the "error object" that is created. For example this code will cause an exception:
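Something like this triggers it. The call is wrapped in try..catch here purely so the snippet runs to completion; on its own, the exception would stop the script:

```javascript
const myFunction = 42; // a number, not a function

try {
  myFunction(); // TypeError: myFunction is not a function
  console.log("this line never runs");
} catch (error) {
  console.log(error.name, error.message);
}
```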
If you run that in a browser you'll see an error logged: TypeError: myFunction is not a function
. You also will not see the log, as the JS stops executing your code when the exception occurs.
The error we saw above was a "TypeError". This is a more specific kind of error that is used when JavaScript expects to see one thing but got something else. In this case we tried to call a number as a function, which meant it was the wrong type of value.
There are several different kinds of built-in error. You can see a full list on MDN. Mostly you'll see TypeError
and ReferenceError
(e.g. ReferenceError: myVariable is not defined
).
We've seen how JS handles exceptions, but what about your own code? It's possible to predict points in your code where something will go wrong and create your own error on purpose. For example if we write a function to square a number we can check whether the caller actually passed a number in:
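A sketch of that check:

```javascript
function square(input) {
  // refuse to continue with an invalid value
  if (typeof input !== "number") {
    throw new Error(`Expected a number but received: ${input}`);
  }
  return input * input;
}
```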
The throw
keyword causes an exception in your code (just like when a built-in JS method breaks). You can throw
anything (throw 5
, throw "hello"
etc), but it's most common to create a new Error
object and throw that. Error objects have a "stack trace", which tells the user what line of code the error occurred on.
So now our square
function behaves similarly to built-in JS methods: execution will stop if it encounters an invalid value. We can make this more specific by using a TypeError
and passing a more useful message:
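Something like:

```javascript
function square(input) {
  if (typeof input !== "number") {
    // a TypeError tells the caller exactly what kind of mistake they made
    throw new TypeError(`Expected a number but received a ${typeof input}`);
  }
  return input * input;
}

// square("hello") now throws:
// TypeError: Expected a number but received a string
```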
All these errors will just stop our program executing. Ideally we want to catch the error and handle it somehow (maybe by providing a message to the user).
We can use a try..catch
block for this. We put all the code we think might error inside the try {}
block, and if an error is thrown the catch (error) {}
block runs with the error object.
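For example, parsing invalid JSON:

```javascript
try {
  const result = JSON.parse("{ not valid json }");
  console.log(result); // never runs: JSON.parse threw
} catch (error) {
  console.error("Parsing failed:", error.message);
}

console.log("the rest of the program still runs");
```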
Here we try to run our code, JSON.parse
throws an error that we catch and log to the console. Since the error has been caught within this block the rest of our code can safely continue to run (so the final console.log
still appears).
This works the same way for errors you throw yourself, like our square
function above:
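E.g. (redefining the square function from earlier so the snippet is self-contained):

```javascript
function square(input) {
  if (typeof input !== "number") {
    throw new Error(`Expected a number but received: ${input}`);
  }
  return input * input;
}

try {
  const result = square("not a number");
  console.log(result); // never runs
} catch (error) {
  console.error(error.message); // Expected a number but received: not a number
}

console.log("the program keeps going");
```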
You can think of throwing an error as bypassing the rest of the code and jumping straight to the closest catch
block.
Let's use try..catch
to handle errors on a Node server.
Download the starter files, cd
in, run npm install
Run npm run dev
to start the server on http://localhost:3000
Visit http://localhost:3000/try-catch. You should see Express' default error response
Use try..catch
in the tryCatch
route handler to catch the error and send your own response to the browser
The response should have a 500
status code and a message of Server error
Don't fix the mistake in the tryCatch
handler (it's deliberate to simulate a real error)
Errors often occur in asynchronous code, as this can involve network requests or file-system access (both of which can take a long time and lose connection partway through).
We can't handle exceptions in asynchronous code using try..catch
because the try
block will have finished executing before the error occurs. For example:
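A sketch of the problem, using an invented helper that rejects after a delay in place of a real network request:

```javascript
// stands in for a slow network request that eventually fails
function failAfter(ms) {
  return new Promise((resolve, reject) => {
    setTimeout(() => reject(new Error("request failed")), ms);
  });
}

let pending;
try {
  pending = failAfter(100); // the try block finishes before the rejection happens
} catch (error) {
  console.log("this never runs");
}

// the rejection has to be handled asynchronously with .catch instead:
pending.catch((error) => console.error(error.message)); // logs "request failed"
```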
If the fetch
request takes 5 seconds the try
will have finished executing long before the error occurs, which means the catch
never runs. Promises don't throw errors because that doesn't work asynchronously. Instead they reject, which is the async equivalent.
Promises have a .catch
method, which allows you to pass a function that runs if the promise is rejected.
It's important to always have a .catch
somewhere in a promise chain. Otherwise you'll get an "unhandled rejection", which could crash your program.
Let's handle a rejection. The server has a model.js
file that pretends to access a database. However the getPosts
function always rejects with an error (don't fix this!).
Visit http://localhost:3000/rejection
Your browser should timeout waiting for a response
Your server should log an error in your terminal: UnhandledPromiseRejectionWarning: Error: Retrieving posts failed
Edit the route handler to catch the model.getPosts()
promise rejecting
You should send a response with a 404
status code and a message of "Posts not found"
Unhandled exceptions are very dangerous on the server. They will cause the whole Node program to crash, preventing it from responding to more requests. This is actually a good thing—attempting to continue serving requests after an unhandled exception could lead to much worse issues, like saving incorrect data to a database or serving the wrong information to users.
This is why Node automatically stops your program on an unhandled exception. Unfortunately it does not do this for unhandled rejections (i.e. when a promise errors). This is why you see a warning when a promise rejects without a .catch
:
In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
It's a good idea to make your program stop on unhandled rejections too. You can do this by listening for an event on the global process
object:
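Something like:

```javascript
process.on("unhandledRejection", (error) => {
  console.error(error);
  process.exit(1); // a non-zero exit code means an error occurred
});
```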
The unhandledRejection
event will fire when a promise rejects without being caught, and your callback function will run. We log the error, then tell the Node process to stop with an "exit code" of 1 (which means an error occurred). This will stop your server processing any more requests.
Add this to your server.js
, then remove your .catch
from the rejection
handler function. Now when you visit http://localhost:3000/rejection you shouldn't see the "unhandled rejection" warning in your terminal. Instead your server should crash and stop processing further requests.
Note: it's always better to handle the promise rejection properly in your route handler so you can send a response. This unhandledRejection
listener is a last-ditch strategy because errors always slip through the cracks.
Ideally your server shouldn't stay crashed: you want it to restart and continue handling requests. This has to be managed by something outside of the Node process.
If you have deployed your server to Heroku it will automatically try to restart your server if it crashed. If it crashes again it will wait up to 20 minutes before trying again, then keep waiting longer before each attempt. You can read about their crash restart policy.
If you're managing your own Node deployment it's common to use something like pm2
or systemd
to automatically restart the process after it crashes.
Learn how to build interactive websites using forms and Node
Forms are the building blocks of interactivity on the web. Until client-side JavaScript gained the ability to make HTTP requests (in the early 2000s) forms were the only way to send user-generated data from the browser to the server. They're still the simplest and most robust way to send data, since they work even if your JS fails to load, is blocked, or errors.
Download the starter files
cd
into the workshop/
directory
npm install
to install the dependencies
npm run dev
to start the server. The dev script uses a helper that automatically restarts your server when you save changes
Before we get stuck into forms lets practice our Express basics. Open workshop/server.js
in your editor. You should see a server that listens on port 3333
. There's also an object containing data about different dogs imported from dogs.js
.
Add a route for the homepage (GET /
)
Return an HTML string containing a <ul>
Each <li>
in the list should contain a dog's name
Hint
You can use Object.values(myObj)
to get an array of all the values. You can then generate a dynamic list by looping over that array:
You could also combine array.map
and array.join
to create the string.
When you're done you should be able to visit http://localhost:3333 and see the list of dogs rendered.
GET requests
Browsers support two types of HTTP requests (without JS): GET
and POST
. When a user navigates (either by clicking a link or typing a URL into the address bar) the browser will send a GET
request, then render the response. There are also certain HTML tags that trigger GET
requests for a resource to display within the page, e.g. <img>
.
Forms can also make GET
requests. Here's an example form:
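Something like:

```html
<form>
  <input type="search" name="search" aria-label="Search" />
  <button type="submit">Search</button>
</form>
```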
By default a form sends a GET
request to the current page (when submitted). It will find all the inputs within the form and add them into the "search" part of the URL (the bit after the ?
). Assuming this form was rendered on example.com
clicking the submit button would send this request:
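Assuming the user typed "hello" into the input, that request would look like:

```
GET https://example.com?search=hello
```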
Each input is added to the search string in this format: ${inputName}=${inputValue}
. If you don't add a name
attribute the input won't be submitted.
There's nothing special about the request: we could have achieved the same result by creating a link like this:
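Something like (with the value hard-coded):

```html
<a href="https://example.com?search=hello">Search for hello</a>
```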
The advantage of a form is the search part of the URL is dynamic—it is typed by the user, not hard-coded into the HTML by the developer.
Forms like this are mostly used for implementing search functionality. The GET
method is for retrieving resources. It shouldn't be used for creating/updating/deleting things, since browsers treat GET
s differently (e.g. they cache them).
Let's add some search functionality to our dogs page. Express automatically parses the "search" part of the URL for each request. You can access this object at request.query
. For example our request above would result in a query object like:
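Assuming the example URL above, the parsed query would look something like:

```
{ search: "hello" }
```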
Add a search form to the homepage (with a single input)
Retrieve the user-submitted value on the server
Filter the list of dogs based on the user-submitted value
Make sure the full list still displays if there's no search value
E.g. if the user searches for "o" the list should only include "rover" and "spot" (since they both contain the letter "o").
You can use string.includes
to check if a string contains a given substring. E.g.
Don't forget this is case sensitive!
When you're done you should be able to submit the form to filter the list of dogs on the page.
POST requests
Forms can also send POST
requests, which allows users to create or change data stored by the server. You can make a form send a POST
by setting the method
attribute.
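For example:

```html
<form method="POST">
  <input type="text" name="name" aria-label="Dog name" />
  <button type="submit">Add dog</button>
</form>
```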
Note: forms cannot use any other HTTP methods. This means we'll be using POST
for creating, updating and deleting things.
A POST
request doesn't include information in the URL. Instead it puts it in the request body. This means it's not directly visible to the user, and won't be logged or cached by things that store URLs. The information will be formatted in the same way as a GET
, it's just sent in a different place. E.g.
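A POST submission of the same search input would look roughly like:

```
POST / HTTP/1.1
Content-Type: application/x-www-form-urlencoded

search=hello
```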
Let's add a form for submitting new dogs to the site. We aren't using a database to store our dogs persistently, so we'll just store new dogs by adding them into the dogs
object in-memory. This means the dogs will reset each time the server restarts.
Note: it's important to always redirect after a POST
request. This ensures the user only ever ends up on a page rendered via a GET
. Otherwise if the user navigated back to the results page their browser would resend the POST
and you'd get a double-submission. This is why lots of sites say "Don't click back or you'll be charged twice"!
Add a new route GET /add-dog
It should render another form with inputs for each property of a dog
Add a new route for POST /add-dog
It should use the Express body-parsing middleware to access the submitted body
Add the new dog to the dogs
object
Redirect back to the homepage so the user can see their new dog in the list
When you're done you should be able to visit http://localhost:3333/add-dog, submit the information for a new dog, then be redirected to the homepage and see that information in the list.
So far we've only had one form per page. Each form has just submitted to the default URL—the current one. However if you want to use multiple forms on a page they'll need different URLs to represent different actions.
For example we might want to be able to delete dogs from our homepage. This action might happen via the /delete-dog
URL. We can tell the form to send its request to this URL with the action
attribute:
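Something like:

```html
<form method="POST" action="/delete-dog">
  <!-- inputs/buttons go here -->
</form>
```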
But how will our /delete-dog
endpoint know which dog to delete? We need the request body to contain the name of the dog to remove, like this:
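E.g. (using "rover" as the example name):

```
name=rover
```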
We could have the user type the name in, but that's not a great experience. It would be better if each "delete button" could be a separate form with a hard-coded "name to delete". That way the user can just click a button to send the delete request.
There are two ways to hard-code data into a form. You can use inputs with type="hidden"
. These aren't displayed to the user but will still be submitted to your server.
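For example:

```html
<form method="POST" action="/delete-dog">
  <input type="hidden" name="name" value="rover" />
  <button type="submit">Delete rover</button>
</form>
```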
You can also set name
and value
attributes directly on button elements. When that button is used to submit the form those values will be submitted in the request body.
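For example:

```html
<form method="POST" action="/delete-dog">
  <button type="submit" name="name" value="rover">Delete rover</button>
</form>
```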
It's like a little self-contained form. The only thing the user sees is the button to click.
Let's add delete buttons next to each dog in the list on the homepage. You can remove a dog from the dogs
object using the delete
operator. E.g.
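```javascript
const dogs = {
  rover: { name: "rover" },
  spot: { name: "spot" },
};

delete dogs.rover;
console.log(dogs); // { spot: { name: "spot" } }
```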
Add a delete form next to each dog's name on the homepage
Each one should send a POST
to /delete-dog
with the name of the dog to remove in the body
Add a new route POST /delete-dog
It should get the name of the dog to remove from the request body
Use the name to remove the dog from the dogs
object
Redirect back to the homepage so the user can see the dog is gone
When you're done you should be able to click the delete button next to each dog and see that dog disappear from the list.
Our server.js
file is starting to get a little cluttered. We've got handlers and logic for several different routes, plus the code that starts our server listening. It's not too hard to follow right now, but as an application grows you'll want to split things up into separate files.
You can import and register route handler functions like this:
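A sketch of one way this might look (the file and function names are invented):

```
// handlers/dogs.js
function home(request, response) {
  response.send(/* homepage HTML */);
}

module.exports = { home };

// server.js
const dogs = require("./handlers/dogs");

server.get("/", dogs.home);
```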
There are different philosophies on modularisation. Some people like dividing things up by the type of code. For example put all the route handlers in one place, all the database queries in another place, and all the HTML templates in another place. So you might have folders like handlers/
, database/
and templates/
. A single feature "add a dog" might be divided across all three folders.
Other people like to divide code up by features. For example put all the code related to a single feature (like "adding a dog") into a single file. So you might just have a routes/
folder containing addDog.js
. This file would contain everything required for that feature—the route handlers, data access and HTML strings all together.
Pick one of the methods above and move your route handlers out of server.js
. Don't forget to import/export everything!
Add a single extra route that can render any dog's page
It should respond with HTML containing all the info about that dog
Add a link for each dog on the homepage so you can click through to each page
Once that's done it would be a better experience if the user was redirected to the relevant dog page after creating a new dog. For example if they created a new dog named "Bilbo" they should be redirected to /dogs/bilbo
to see all the info about that dog.
Amend your POST /add-dog
handler to redirect to the relevant dog's page
It's important to check user-submitted information. It could be missing, incomplete, or even malicious. For example right now you can submit the "add dog" form with empty inputs, which results in the other pages looking broken.
Client-side validation can help (e.g. adding required
or pattern
to inputs), however it's easy to bypass (e.g. by editing the elements in dev tools).
You should always validate your data on the server, since that's the only place you can trust.
Amend your POST
handlers to check your data is valid
If the data is not valid redirect the user to a generic error page
Create a generic error route that tells the user something went wrong
It would be a better user experience to send the user back to the same page, but highlight which inputs had validation errors. Unfortunately that's a little more complex. HTTP requests are "stateless", which means we can't distinguish a normal GET /add-dog
from a redirect to GET /add-dog
after an error.
We can use cookies to store information between requests. We could store the validation errors in a cookie before redirecting, then use that info to render error messages in the form. We'll be looking at cookies in a later workshop, but feel free to attempt this now if you want a challenge.
Write custom Node scripts to automate tasks in your terminal
Node isn't just used for HTTP servers—it's a fully-fledged programming language that can do almost anything. Let's see how we can use it to create useful scripts we can run in our terminal.
We could recreate the built-in ls
program using Node's filesystem module:
We're importing "fs/promises"
because Node's standard "fs"
module uses callbacks. We get the directory the script was run from using process.cwd()
("current working directory"). Finally we use fs.readdir
to get the names of every file in the directory and log them.
You can run this script using node ./path/to/ls.js
. It should list the contents of whatever directory you're currently inside in your terminal.
We're currently running it by passing the file to our node
program. However we can make the script directly executable by doing two things:
First we must add a "shebang" to the top of the file. This is a special comment that tells the terminal which program it should use to run a script.
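For Node that looks like:

```
#!/usr/bin/env node
```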
Second you need to change the file's permissions to make it "executable". You can do so with:
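```
chmod +x ./path/to/ls.js
```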
You can now run this script with ./path/to/ls.js
. You actually don't even need the .js
extension anymore, since the shebang tells your terminal that it's a Node file. So you could rename it to just ls
and run it with ./path/to/ls
.
Write a Node script that creates a new HTML file containing all the boilerplate code you normally need. The filename should be passed in as an argument. E.g. running this command:
should create a new file named hello-world.html
containing:
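Something like this (the exact boilerplate is up to you):

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <title>hello-world</title>
  </head>
  <body></body>
</html>
```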
Practice using promises to avoid "callback hell" in asynchronous JavaScript
Running functions in sequence (one after another) is a common requirement. For example, triggering animations in order, or requesting some data then sending that response on to another API.
If your code is synchronous it's easy to make it run in order: that's what JavaScript does automatically. Each line of code runs one by one:
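For example:

```javascript
const order = [];
order.push("first");
order.push("second");
order.push("third");
console.log(order); // ["first", "second", "third"], always in this order
```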
However once you have asynchronous code this gets harder to manage. You don't know how long each bit of code will take, so you have to make sure each line of code waits for the previous one.
We've previously seen how to do this with callbacks:
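A sketch of the callback approach, using nested setTimeouts:

```javascript
setTimeout(() => {
  console.log("first");
  setTimeout(() => {
    console.log("second");
    setTimeout(() => {
      console.log("third"); // three levels deep already
    }, 1000);
  }, 1000);
}, 1000);
```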
However this quickly gets difficult to manage as each new callback introduces another level of nesting.
Promises make it easier to run code in sequence. A promise object's .then
method returns a new promise that resolves with whatever value you returned from the callback you passed in.
Here we have a promise that will eventually resolve with the value 1
:
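```javascript
const onePromise = new Promise((resolve) => {
  setTimeout(() => resolve(1), 1000);
});
```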
We can access this value using the promise's .then
method:
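```javascript
const onePromise = new Promise((resolve) => {
  setTimeout(() => resolve(1), 1000);
});

onePromise.then((value) => console.log(value)); // logs 1 after one second
```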
Since .then
returns a new promise we can assign it to a variable and use it again:
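```javascript
const onePromise = new Promise((resolve) => {
  setTimeout(() => resolve(1), 1000);
});

const fivePromise = onePromise.then((value) => value * 5);

fivePromise.then((value) => console.log(value)); // logs 5
```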
Here we wait for onePromise
to resolve with 1
, then multiply that by 5
and return the result. This creates a new promise that will eventually resolve with 5
. We can then access this value by using the .then
method of this second promise.
Since each .then
returns a new promise object we can avoid all the extra variables and chain the .then
methods directly:
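```javascript
const onePromise = new Promise((resolve) => {
  setTimeout(() => resolve(1), 1000);
});

onePromise
  .then((value) => value * 5)
  .then((value) => console.log(value)); // logs 5
```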
This example is a bit silly since multiplication is synchronous—we can just use value * 5
directly without the second then
. However since .then
s always return new promises we can chain asynchronous operations together to avoid nesting our callbacks.
Imagine we had another function that multiplied numbers after 2 seconds:
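A sketch of that function, and what nesting its callbacks starts to look like:

```javascript
function multiplyAfterTwoSeconds(value, multiplier) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(value * multiplier), 2000);
  });
}

// nesting again:
multiplyAfterTwoSeconds(1, 5).then((five) => {
  multiplyAfterTwoSeconds(five, 2).then((ten) => {
    console.log(ten); // logs 10 after about four seconds
  });
});
```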
Here we're starting to recreate our "callback hell" from the traffic lights example. Each new asynchronous operation means nesting a callback one level deeper.
However since each .then
returns a promise we can return our fivePromise
promise and access it in the next .then
:
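Something like (with both helpers redefined so the snippet is self-contained):

```javascript
const onePromise = new Promise((resolve) => {
  setTimeout(() => resolve(1), 1000);
});

function multiplyAfterTwoSeconds(value, multiplier) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(value * multiplier), 2000);
  });
}

onePromise
  .then((one) => {
    const fivePromise = multiplyAfterTwoSeconds(one, 5);
    return fivePromise; // the next .then waits for this promise to fulfil
  })
  .then((five) => console.log(five)); // logs 5 after about three seconds
```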
Since all we do with the fivePromise
variable is return it we can skip defining it and simplify our code to:
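Something like:

```javascript
const onePromise = new Promise((resolve) => {
  setTimeout(() => resolve(1), 1000);
});

function multiplyAfterTwoSeconds(value, multiplier) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(value * multiplier), 2000);
  });
}

onePromise
  .then((one) => multiplyAfterTwoSeconds(one, 5))
  .then((five) => console.log(five)); // still logs 5 after about three seconds
```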
The magic part is that we can return sync or async operations from a .then
—promise objects don't care what kind of value is inside them. The next .then
in the chain will always wait for the previous value to be ready.
You're going to recreate the traffic lights from the callback workshop, but using promises to avoid nesting your callbacks.
Download the starter files and open challenge-1/index.html
The pre-defined wait
function is like setTimeout
, except it returns a promise that resolves after waiting the specified number of milliseconds
Use wait
to write a light
function. This should:
Take a colour string argument
Wait 1 second then log this string
Use light
to log a sequence of traffic light colours with a one second pause between each
E.g. "green", "amber", "red", "amber", "red", "green", "finished"
Try not to let your callbacks go beyond a single level of nesting!
You probably won't be programming traffic lights using JavaScript, so let's try a more realistic example.
Open challenge-2/index.html
Use fetch
to request data from "https://pokeapi.co/api/v2/pokemon/pikachu"
Once your code has the response it should grab the species.url
property and make a new request to that
Once your code has that response it should grab the shape.url
property and make a final request to that
Log the final response body. It should look something like this:
Try not to let your callbacks go beyond a single level of nesting!
Learn how to use Node and npm to run JS on your machine
Node is a version of JavaScript that works outside of a web browser. It was created so web developers who already knew JS could use the same language to write HTTP servers, CLI programs and more.
Node is basically the JS engine from the Chrome web browser, plus some extra features for things browsers cannot do, like making HTTP servers or accessing files on the computer.
Since Node uses the JS language it has the same syntax, keywords and features as JS in the browser. However (just like browsers) different versions of Node support different new language features. Something that was added to JS in Chrome may not be available in Node yet (and vice versa).
The main difference to browser-JS is that Node has no window
or document
global objects (and so none of the things inside them like querySelector
), since those concepts only make sense in a browser.
You'll need to have Node installed on your computer to follow this workshop. Check whether it is already installed by running this in your Terminal:
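```shell
node --version
```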
You should see a version number printed. If you get an error that means Node isn't installed. Follow the installation instructions, then try again.
Our programme relies on some features that were only added in Node version 18 (the current version), so if you have an older version than that you should install the newer one with:
The Node installation on your computer comes with a command-line program called node
. This allows you to use the node
command in your Terminal to run JS code.
The quickest way to try Node out is with the "REPL". This stands for "read, eval, print, loop" and is a quick playground for running code (most languages come with one).
Run this command in your Terminal to start the REPL:
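```
node
```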
You should see something like:
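Something like this (the exact version number will vary):

```
Welcome to Node.js v18.0.0.
Type ".help" for more information.
>
```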
You can type JS code in here, then hit "Enter" to execute it (just like a browser console).
The REPL is mostly for playing around. To write a real program you need .js
files. You can tell Node to run a JS file by passing its path as an argument:
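For example:

```
node example.js
```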
Node will parse the JS code in the file, execute it, and display anything logged in your terminal.
Node comes with a "package manager" called npm. This is a way to easily install and use code written by other people. This is generally more robust than just copy/pasting code from the internet.
The npm company maintains a registry that anyone can upload code to, containing thousands of 3rd party modules. It also provides a command-line program for managing those modules in your project. This comes with Node, so you should already have this CLI available:
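```shell
npm --version
```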
The package.json file
Node projects usually have a configuration file called package.json
. This is a special file that lists information about the project, including any dependencies. These are modules installed from npm that your project uses.
npm init
This command will "initialise" your project by asking you some questions then creating the package.json
.
You can pass the -y
flag to skip all the questions and create the package.json
with the defaults. You can edit the JSON file by hand later if you need to.
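```
npm init -y
```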
npm install
This is how you install 3rd party modules to use in your code. For example to install the figlet
module:
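```
npm install figlet
```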
npm will list the module in the "dependencies"
field in your package.json
:
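Something like this (the exact version number will vary):

```json
{
  "dependencies": {
    "figlet": "^1.5.2"
  }
}
```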
Now when another developer needs to work on your project they can clone your repo then run just this command:
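```
npm install
```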
npm will read the dependencies listed in the package.json
and automatically download all the 3rd party modules required to run the project.
npm will also create a directory named node_modules
and put all the 3rd party code in there. This will be quite large, since modules you install can have their own dependencies (and those dependencies can depend on other modules...). Since this directory gets so big it's common to add it to the .gitignore
file.
Development dependencies
Some 3rd party modules are only used for development purposes. E.g. a testing library or a linter. You can mark a module as a dev dependency with the -D
flag when you install:
This will put it under the "devDependencies"
field in the package.json
. This helps during deployment—it's quicker not to install a bunch of modules that aren't actually needed for your production server.
Global modules
You can also install modules "globally" on your computer using the -g
flag. This makes them available to use anywhere in your terminal, and so is sometimes used as an alternative to Homebrew or apt-get
.
You shouldn't use global modules in your Node apps, since they aren't listed in the package.json
and so won't be installed automatically if another developer clones your repo and runs npm install
.
It's common for modules you install to have command-line programs you can run in your terminal. E.g. the popular ESLint linter installs a CLI so you can check your code by running a command like eslint add.js
.
npm installs dependency CLIs into node_modules/.bin/
. This means you can run the figlet
CLI we just installed in our terminal like this:
However this is pretty awkward to type, especially if it's a command we need to use a lot (like "start the dev server"). Luckily npm scripts make this nicer.
npm automatically creates a field called "scripts"
in your package.json
. These are shortcuts for different tasks you might want to do while developing your app. They're like per-project command-line aliases.
npm will automatically add ./node_modules/.bin
to the path of any command you use in these scripts. So you could add a "greet" script like so:
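For example (the script name "greet" and the figlet argument are just illustrations):

```json
{
  "scripts": {
    "greet": "figlet hello"
  }
}
```

With this in place, npm can find the figlet CLI without you typing the full node_modules/.bin path.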
You can run npm scripts in your terminal with npm run <name>
. So in this case:
Learn the basics of securely storing user passwords on your Node server.
We'll look at why you shouldn't store passwords as plaintext, what hashing and salting are, and how to use the BCrypt algorithm in Node.
Download starter files and cd
in
Run npm install
to install dependencies
Run npm run dev
to start the development server
Open in your browser
The server has five routes:
GET /
: homepage with links to /sign-up
and /log-in
GET /sign-up
: form to create a new user
POST /sign-up
: new user form submits data to here
GET /log-in
: form to sign in to an existing account
POST /log-in
: sign in form submits data to here
Instead of a real database there's a hacky custom thing in database/db.js
that stores data in a JSON file (don't worry about trying to read this unless you're curious). This is both to avoid the complication of setting up a real DB, and so you can see the data getting updated as you use the site. You'll see a database/db.json
file created when you start the server for the first time.
The POST /sign-up
handler stores the new user details in the DB. The POST /log-in
handler searches the DB for a user with a matching email, then compares the submitted password with the stored user's password. If they match the user is "logged in".
You should see the user you just created in there. Unfortunately you can see your password stored in plaintext. This means anyone with access to this database can read it.
There are a few problems with this:
You (or a future employee of your company) know all users' passwords
Passwords are generally re-used for other websites
If a hacker steals your database they immediately know all users' passwords
We have a problem: storing the password as plaintext is bad, but we need to be able to compare a submitted password to a saved one in order to verify users and log them in. This is where hashing is useful.
Hashing is when you use a mathematical process (algorithm) to convert a string into a different one. Hashes are:
One-way: it should be impossible to reverse the process.
Deterministic: hashing the same string always gives the same result.
Unique: hashing a different string should never create the same result.
For example hashing "hunter2" with the popular "sha256" algorithm always gives us "f52fbd32b2b3b86ff88ef6c490628285f482af15ddcb29541f94bcf526a3f6c7". There is no way to turn that hash back into "hunter2" again, so it's safe to store. No other password will create an identical hash.
When a new user signs up we hash their password and store the hash in the database. When they next log in we ask for their password again, hash it again, then compare that hash to the one we have stored. The hashes will only match if the input password was the same both times.
Here's how we'd create the initial hash using the built-in Node crypto
module:
We have to specify which algorithm we want to use ("sha256"
) and what encoding the result (or "digest") string has (hexadecimal).
Here's roughly how we would verify a user when they signed in again later:
First we need to stop saving users' passwords in plaintext.
Edit the post
function in workshop/handlers/signUp.js
We want to hash the submitted password using the built-in crypto.createHash()
method
Store the hash instead of the plaintext password in the database
Create a new user at /sign-up
: you should see a random string password appear in db.json
Then we need to make our logging in comparison work.
Edit the post
function in workshop/handlers/logIn.js
We need to hash the submitted password before we compare it to the stored hash
You should be able to log in as the user you just created
There are still some issues with our hashed passwords. The only way for a hacker with a stolen database to figure out the passwords is with a "brute force" attack. This is where they use software to automatically try a huge list of possible passwords, hashing each one then comparing it to the passwords in the database.
A good hash algorithm is deliberately quite slow. This limits a hacker who has stolen a database—the brute force attack will take a long time since they'll have to try thousands of passwords. Hackers can speed this process up by using "rainbow tables". This is a pre-hashed list of common words so the hacker doesn't have to hash each password to find a match in the database. For example instead of:
They can skip the hashing part and just try the hashes directly:
We can prevent the use of rainbow tables by "salting" our passwords. This means adding a random string to the password before hashing it. That will ensure our password hashes are unique to our app, and so won't show up in any rainbow tables.
For example "cupcake" hashed using SHA256 is always "b0eaeafbf3..."
. That means the hash can be published in rainbow tables. If we instead add a salt to the password to make "kjnafn9nbjka2kjn.cupcake"
then the hash will be "6bc8571635..."
, which won't appear in any rainbow table.
Add a long string to the password before you hash it in workshop/handlers/signUp.js
Add the same string to the password before you hash it in workshop/handlers/logIn.js
so you can correctly compare it to the hash in the database
We still have a security flaw here: we're using the same salt for every password, which means our hashes won't be unique. If you create two new users with the same password you should see the same hash in db.json
. This is a problem because as soon as a hacker cracks one hash they'll have access to all the duplicate passwords.
We can solve this problem by generating a random salt for each new user. This will ensure that each hash is totally unique, even if the password is the same as another user's.
However we need the salt when the user logs back in, in order to generate the correct hash and verify their password. This means we must store the random salt in the DB along with the password.
This is a fiddly process that is easy to mess up. Instead of implementing it ourselves we'll rely on a battle-tested library to do it for us.
BCrypt is a popular hashing algorithm. It's designed specifically for passwords, and (in computer terms) is very slow. This isn't noticeable to users but makes a brute-force attack much more difficult for a hacker.
bcryptjs challenge
Since BCrypt is supposed to be slow the implementation is asynchronous, so the library's methods return promises. It has a method for generating a hash. This takes the string to hash and a number representing how strong the salt should be (the higher the number, the longer it will take to hash):
It also has a method for comparing a string to a stored hash. This takes the (unhashed) string to compare as the first argument and the hash as the second argument:
BCrypt automatically stores the salt as part of the hash, so you don't need to implement that yourself.
Run npm install bcryptjs
to install the library
Use bcrypt.hash()
to hash your password before saving to the DB in signUp.js
Use bcrypt.compare()
to compare the submitted password to the stored hash in logIn.js
Learn the basics of creating DOM elements using JSX and React components
React makes dealing with the DOM in JavaScript more like writing HTML. It helps package up elements into "components" so you can divide your UI up into reusable pieces.
Interacting with the DOM can be a frustrating experience. It requires lots of awkward lines of code where you tell the browser exactly how to create an element with the right properties.
Even if we create our own function to handle some of the repetitive parts it's a little hard to read:
This is frustrating because there is a simpler, more declarative way to create elements: HTML.
Unfortunately we can't use HTML inside JavaScript files. HTML can't create elements dynamically as a user interacts with our app. This is where React comes in:
This variable is a React element. It's created using a special syntax called JSX that lets us write HTML-like elements within our JavaScript.
The example above will be transformed into this normal JS:
This function call returns an object that describes your element:
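Roughly what that looks like (the element and class name are made-up examples, and the real React element object has extra internal fields; this is a simplified sketch):

```javascript
// JSX:         const title = <h1 className="main-title">Hello</h1>;
// Compiles to: const title = React.createElement("h1", { className: "main-title" }, "Hello");

// The call returns a plain object describing the element, shaped roughly like:
const title = {
  type: "h1",
  props: { className: "main-title", children: "Hello" },
};

console.log(title.type); // logs "h1"
```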
React builds up one big tree structure of all these element objects that represents your entire app. It then uses this tree to create the actual DOM elements for you. (This is called the virtual DOM, but you don't need to worry about that right now)
It can be helpful to remember that the HTML-like syntax is really normal function calls that return objects.
JSX supports inserting dynamic values into your elements. It uses a similar syntax to JS template literals: anything inside curly brackets will be evaluated as a JS expression, and the result will be rendered. For example:
You can do all kinds of JS stuff inside the curly brackets, like referencing other variables, or conditional expressions.
You can put any valid JS expression inside the curly brackets. An expression is code that resolves to a value. I.e. you can assign it to a variable. These are all valid expressions:
This is not a valid expression:
if
blocks are statements, not expressions. The main impact of this is that you have to use ternaries instead of if
statements inside JSX.
React elements aren't very useful on their own. They're just static objects. To build an interface we need something reusable and dynamic, like functions.
A React component is a function that returns a React element.
A component can return a JSX element, or a string, number, boolean or array of JSX elements. Returning null
, undefined
, false
or ""
(empty string) will cause your component to render nothing.
Components are useful because JSX allows us to compose them together just like HTML elements. We can use our Title
component as JSX within another component. It's like making your own custom HTML tags.
When we use a component in JSX (<Title />
) React will find the corresponding Title
function, call it, and use whatever element it returns.
A component where everything is hard-coded isn't very useful. It will always return the exact same thing, so there's almost no point being a function. Functions are most useful when they take arguments. Passing different arguments lets us change what the function returns each time we call it.
JSX supports passing arguments to your components. It does this using the same syntax as HTML:
React component functions only ever receive one argument: an object containing all of the arguments passed to it. React will gather up any key="value"
arguments from the JSX and create this object.
This object is commonly named "props" (short for properties). Using an object like this means you don't have to worry about the order of arguments. So in this case our Title
function will receive a single argument: an object with a "name" property.
You can use these props within your components to customise them. For example we can interpolate them into our JSX to change the rendered HTML:
Now we can re-use our Title
component to render different DOM elements:
Since JSX is JavaScript it supports passing any valid JS expression to your components, not just strings. To pass JS values as props you use curly brackets, just like interpolating expressions inside tags.
It would be nice if we could nest our components just like HTML. Right now this won't work, since we hard-coded the text inside our <h1>
:
JSX supports a special prop to achieve this: children
. Whatever value you put between JSX tags will be passed to the component function as a prop named children
. You can then access and use it exactly like any other prop.
Now this JSX will work as we expect:
This is quite powerful, as you can now nest your components to build up more complex DOM elements.
You may be wondering how we get these React components to actually show up on the page.
React consists of two libraries—the main React
library and a specific ReactDOM
library for rendering to the DOM (since React can also render virtual reality or native mobile apps).
We use the ReactDOM.render()
function to render a component to the DOM. It takes an element as the first argument and a DOM node as the second.
It's common practice to have a single top-level App
component that contains all the rest of the UI.
Time to create some components! Open up challenge.html
in your editor. You should see the components we created above. Open this file in your browser too to see the components rendered to the page.
Create a new component called Card
. It should take 3 props: title
, image
and children
, that render into h2
, img
and p
elements respectively.
Replace the p
in the App
component with a Card
. Pass whatever you like as the 3 props (although here's an image URL you can use: https://source.unsplash.com/400x300/?burger
).
Screenshot of this page with no CSS at all
Screenshot of Voiceover's headings menu for this page
Since request bodies are sent in lots of small chunks (as they can sometimes be very large) our server doesn't get it all in one go. This means you must use the built-in Express middleware for parsing request bodies. You can refer back to our to see exactly how.
It would be nice if each dog had its own page that showed all the information about it. For example GET /dogs/pongo
would show information about Pongo. You can achieve this with Express route parameters.
APIs often require you to make requests to multiple different URLs to get all the data you need. For example the PokéAPI returns Pokémon objects with properties containing followup URLs with extra information (since it would make for a very big initial response if they included everything).
Once you've started the dev server open in your browser. You should see a sign up form—use this to create an account, then check the workshop/database/db.json
file. This is simulating a real database so we can see how our user data is stored.
We'll be using the bcryptjs library (avoid the bcrypt
one, which has C++ dependencies and doesn't work on some systems). This library has methods for hashing/salting and comparing just like we did manually above.
Learn how to rewrite older React classes to use the newer hooks
Hooks like useState
, useEffect
(and more) were added to React a couple of years ago. Before that stateful components had to be created using JavaScript classes. It's important to be able to read class-based code since you might encounter it out in the world.
Classes were added to JS with ES6. They're a special syntax for creating reusable objects with methods and properties.
They can also "extend" other classes to inherit their properties.
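A quick sketch of the syntax (Dog and Puppy are made-up examples):

```javascript
class Dog {
  constructor(name) {
    this.name = name; // a property set when the object is created
  }
  speak() {
    return `${this.name} says woof`; // a method shared by all instances
  }
}

// Puppy inherits Dog's constructor and properties
class Puppy extends Dog {
  speak() {
    return `${this.name} says yip`; // overrides the inherited method
  }
}

console.log(new Dog("Rex").speak()); // "Rex says woof"
console.log(new Puppy("Rover").speak()); // "Rover says yip"
```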
Don't worry too much about classes—they're rarely used in React anymore, and even when they were, hardly any of their features were used.
React class components are created by extending the React.Component
base class:
The render()
method is the equivalent of a function component body. You return React elements from here to render them to the DOM.
We can set a class property named state
to tell React to keep track of some values. This property is always an object.
We can access the state object via this.state
.
If we want to update state we call this.setState()
and pass in a new object. React will merge this object with the existing state:
We can also store methods as properties on the class so they're reusable:
this.setState()
can take a function instead of an object if you need to access the previous state value (the same as with React.useState()
).
Classes don't have a built-in way to deal with side-effects. Instead you have to hook into their "lifecycle" using specially named methods. These functions are called at various points by React as it creates your component, puts it into the DOM, updates it or removes it.
For example to run some code when React is ready to render your component to the page we use componentDidMount
:
To run some code when your component updates (i.e. is passed new props or setState
is called) you can use componentDidUpdate()
. To clean up after your component (e.g. cancelling timers or removing global event listeners) you can use componentWillUnmount()
. There are quite a lot of these and you probably won't need them all.
Download the starter files and cd
in
Run npm install
Run npm test
to start the test watcher
Rewrite src/Counter.js
to use hooks instead of classes
Rewrite src/Keyboard.js
to use hooks instead of classes
Rewrite src/Pokemon.js
to use hooks instead of classes
Keep all the tests passing!
Learn how to handle errors and submit data with the fetch method
The browser's fetch
method is deliberately low-level. This means there are certain things you'll almost always need to do to make requests in a real application.
fetch
is only concerned with making HTTP requests. From this perspective as long as it receives a response it was successful, even if that response says something like 500 server error
. Most of the time in your application code you want to treat non-200 status codes as errors.
Open workshop.html
in your editor
Add a fetch
call to "https://echo.oliverjam.workers.dev/status/404"
(this always returns a 404)
Add a .then()
and .catch()
. Which of these runs? What does the response look like?
We need to handle HTTP responses we don't want. We can do this by checking the response.ok
property. This will be true
for successful status codes (like 200
) and false
for unsuccessful ones (like 404
or 502
).
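A sketch of the pattern (the exact error-handling details are up to you):

```javascript
function checkResponse(response) {
  if (!response.ok) {
    // Turn unwanted HTTP statuses into real errors so .catch() runs
    const error = new Error(`HTTP error: ${response.status}`);
    error.status = response.status;
    throw error;
  }
  return response;
}

// fetch("https://echo.oliverjam.workers.dev/status/404")
//   .then(checkResponse)
//   .then((res) => res.json())
//   .catch((err) => console.log(err.status)); // 404
```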
Edit your .then()
to check the response's ok
property
If the response is not okay throw a new error with the status
property of the response
Now does your .catch()
run?
fetch
allows us to make any kind of HTTP request we like. So far we have made GET
requests, but those won't allow us to submit data to a server. To do that we'll need to configure some options by passing a second argument to fetch
. E.g.
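A sketch of the second argument (the body object here is made up; the echo URL is the one used in this workshop):

```javascript
const options = {
  method: "POST", // use a method other than the default GET
  headers: { "content-type": "application/json" }, // tell the server we're sending JSON
  body: JSON.stringify({ name: "oli" }), // the data, stringified
};

// fetch("https://echo.oliverjam.workers.dev/json", options)
//   .then((res) => res.json())
//   .then(console.log);
```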
This options object can include lots of properties. Here are some useful ones:
method
: to use methods other than GET
headers
: to send extra info about the request. e.g. if we're submitting JSON we should set the "content-type"
header to "application/json"
body
: to send information to the server. If we're sending JSON we also need to JSON.stringify
the data.
Edit your fetch
to send a POST
request to "https://echo.oliverjam.workers.dev/json"
Send a JSON body containing an object with whatever properties you like
Don't forget the "content-type"
!
So far we've only hard-coded our requests. In reality they're usually triggered by a user submitting a form or clicking a button. There are several different ways we can access form data in our JavaScript.
Forms are the semantically correct element for receiving user input. We should use them even when we're using JS to handle the request (rather than relying on the native browser submission).
We can add a handler for the submit event like this:
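A sketch (the form selector is an assumption):

```javascript
function handleSubmit(event) {
  event.preventDefault(); // stop the browser's native form submission
  // ...read the input values and send the fetch request here
}

// In the browser, attach the handler to the form element:
// document.querySelector("form").addEventListener("submit", handleSubmit);
```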
event.preventDefault()
will stop the browser trying to send the request for you. We want to handle the request with fetch
instead.
In order to send our request we have to get hold of the values the user entered. There are a few ways we could do this.
querySelector
We can use querySelector
to directly access each input element, then get its value. For example document.querySelector("#username").value
.
Create a form with two inputs and a submit button
Add a "submit"
event handler to the form (don't forget preventDefault
)
Use querySelector
to get each input's value
Use fetch
to POST
the data as JSON to the same URL as before
Log the response you get from the server
new FormData()
There is a built-in API that mirrors a form's native behaviour. We can use new FormData(myForm)
to create a FormData
instance. This is what the form would send if we didn't call preventDefault()
, and contains all the input values.
If we want to submit this as JSON we need to turn it into a normal object. You can do this with Object.fromEntries(data)
. Note: fromEntries()
is relatively new and isn't supported in older browsers.
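A sketch of the conversion (in the browser, data would come from new FormData(event.target); here we use a plain list of entries to show what fromEntries does):

```javascript
// In a real submit handler:
// const data = new FormData(event.target);
// const body = Object.fromEntries(data);

// Object.fromEntries turns any list of [key, value] pairs into an object:
const body = Object.fromEntries([
  ["username", "oli"],
  ["password", "hunter2"],
]);

console.log(body); // { username: 'oli', password: 'hunter2' }
// JSON.stringify(body) is then ready to send as the fetch body
```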
Edit your previous solution
Use new FormData()
to get all the input values
Turn the FormData into an object to submit
We're going to make a Pokémon search page using the PokéAPI.
Create a form with a search input and submit button
When the form is submitted request the Pokémon the user typed from "https://pokeapi.co/api/v2/pokemon/NAME"
If the request succeeds show the Pokémon's name and sprite
If the request fails show a relevant error to the user
If you have extra time try using some of the other data in the response body to show e.g. the Pokémon's types or stats. Write some CSS to make it pretty!
Solution preview
Learn how to log users in using session cookies
This workshop will show you how to combine password-based authentication with cookie-based sessions to keep users logged in to your site.
Download the starter files and cd
in
Run npm install
The starter files include two scripts to help with database setup.
Create a new Postgres user and database using:
This script will also create a .env
file containing the DATABASE_URL
environment variable, so your server knows how to connect to the local DB.
Insert example data into the database using:
This will recreate all the tables from scratch each time you run it, so it can be handy to "reset" everything if you mess up during the workshop.
Finally you can start the server:
Take a moment to look at the existing code. The server has routes for signing up and logging in. The GET
routes render forms, but the POST
routes don't do anything but redirect.
Important: a COOKIE_SECRET
environment variable is set in the .env
file and used to configure the cookie-parser
middleware. When you deploy a server to Heroku you'll need to create a long random string and set it in your app's "Config vars" in Settings.
The database created in database/init.sql
contains two tables: users
and sessions
. We'll be storing new users who sign up in users
, and currently logged in users in sessions
.
The sessions
table has a data
column that is the JSON
type. This means it can store generic blobs of unstructured data, which is perfect for a session since we don't know exactly what we want to put in there in advance.
You're going to implement the sessions-based authentication functionality. You'll work step-by-step to create each part of the code as a separate function, then bring all the parts together to make the server work.
There are unit tests for each part of the workshop. You can run these to find out if you've implemented the functions correctly. For example:
The database-related code is separate from the rest of the server logic, in database/model.js
. There are already some functions for accessing data in this file.
The model is missing a way to insert new sessions into the database, so you need to write this function.
Write a createSession
function that takes a session ID and a data object, inserts them into the sessions
table, and returns the session ID. For example:
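A sketch of what createSession might look like (the db object here is a stand-in for the project's real pg pool, and the exact query-method API is an assumption):

```javascript
// database/model.js (sketch)
const db = {
  // stand-in for the real pg pool; replace with the project's connection
  query: (sql, values) => Promise.resolve(),
};

function createSession(sid, data) {
  const INSERT_SESSION = "INSERT INTO sessions (id, data) VALUES ($1, $2)";
  // pg serialises the data object into the JSON column for us
  return db.query(INSERT_SESSION, [sid, data]).then(() => sid);
}
```

So createSession("abc123", { user: { name: "oli" } }) would insert the row and resolve with "abc123".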
User sign up is a three-step process:
Get the submitted user data, hash the password, insert the user into the DB
Create a new session ID, then store the user data in the sessions
table (so they're logged in)
Set a cookie containing the session ID so they stay logged in on future requests
The auth.js
file is going to contain all the authentication related code. You'll need to write two functions in here, one to create a user and one to save a session.
Write a createUser
function in auth.js
. It should take an email, password, and name as arguments, hash the password, then store the user in the database, returning the saved user. For example:
Write a saveUserSession
function in auth.js
. It should take a user object, generate a random session ID, then store the user data in the sessions
table. For example:
Hint: you can generate a random, long session ID using Node's crypto
module:
Once those functions are working you need to use them in the /sign-up
route:
Use auth.createUser
and auth.saveUserSession
in routes/signUp.js
. Create the user, then save the session, then store the session ID in a cookie before redirecting.
You can use the auth.COOKIE_OPTIONS
export when you set the cookie. You're going to be setting cookies in multiple places, so it's a good idea to centralise the config.
User log in is a very similar three-step process:
Get the submitted user data, hash the password, check the hash matches the one you have stored for that user
Create a new session ID, then store the user data in the sessions
table (so they're logged in)
Set a cookie containing the session ID so they stay logged in on future requests
Only the first step is different for this route, so you'll need to write just one more function in auth.js
.
Write a function verifyUser
that takes an email and password as arguments, then gets the stored user from the DB using the email, then uses bcrypt.compare
to verify the password. If the passwords match return the user object, otherwise throw an error. For example:
Once this function is working you need to use it in the /log-in
route:
Use auth.verifyUser
, auth.saveUserSession
and auth.COOKIE_OPTIONS
in routes/logIn.js
. Verify the user's password, then save the session, then store the session ID in a cookie before redirecting.
The POST /log-out
route should delete the stored session from the DB, clear the session cookie and redirect back to the home page.
Write a deleteSession
function in model.js
that takes a session ID and deletes the matching row from the sessions
table, returning nothing. For example:
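A sketch of deleteSession (as above, the db object is a stand-in for the project's real pg pool):

```javascript
// database/model.js (sketch)
const db = {
  // stand-in for the real pg pool; replace with the project's connection
  query: (sql, values) => Promise.resolve(),
};

function deleteSession(sid) {
  const DELETE_SESSION = "DELETE FROM sessions WHERE id = $1";
  // resolve with nothing once the row is gone
  return db.query(DELETE_SESSION, [sid]).then(() => {});
}
```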
Use model.deleteSession
in the routes/logOut.js
. The handler should delete the session and clear the cookie.
Practice your understanding of variable scoping by debugging a JS app
Scope is the context a variable is available in. It defines what variables can be used in each part of your code. There are two kinds of scope: global and local.
Everything at the "top-level" of your code is global. This means anything outside of functions or "blocks" like if
statements.
The global scope is also shared across all normal script tags. This can be confusing as you can use variables that don't appear to exist in that JS file. For example:
Variables inside of functions or "blocks" are locally scoped. A block is created by curly brackets, like if
statements.
Local variables are not visible or usable outside of that function or block.
A local scope always has access to the scopes above it.
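For example (x and err are arbitrary names):

```javascript
const x = 10; // defined in the outer scope

if (x > 5) {
  const err = "too big"; // defined inside the if block
  console.log(x); // works: the block can see the scope above
}

// console.log(err); // ReferenceError: err only exists inside the block
```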
Here the if
block can see the x
variable as that is defined in the scope "above". However the console.log(err)
will fail as the err
variable is defined inside a block—a "lower" scope.
Think of your code as a series of nested one-way mirrors: code can see out into the scopes above, but not further down.
Variables defined with var
are not block scoped, whereas those defined with let
and const
are. var
is still function scoped though.
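A sketch of the difference:

```javascript
function scopeDemo() {
  if (true) {
    var a = 1; // var ignores the block: scoped to the whole function
    let b = 2; // let is scoped to this block only
  }
  console.log(a); // 1
  // console.log(b); // ReferenceError: b doesn't exist out here
  return a;
}

scopeDemo();
```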
Generally you should always prefer let
and const
, since it can be confusing for variables to be accessible outside of a block.
Open starter-files/challenge/index.html
in your browser
You should see a JS error in the console.
Fix this error, and every other error that shows up
Don't worry about understanding all of the code, just try to make it work. This is mostly an exercise in debugging, so keep persisting until the app works like the solution:
Learn how to use the useState and useEffect hooks to create dynamic interactions in React
React is designed to build dynamic apps with lots of interaction. A common difficulty with apps like this is keeping the DOM up-to-date as the user interacts. React has two concepts to help keep this manageable: "state" and "effects".
State is data that changes while your application is running. This might be in response to user actions, or after a fetch
request finishes.
In React all stateful values are stored in JS as special variables. We can render our UI based on these variables—when they change React will automatically re-run the component function and update the DOM to reflect the new state value.
Imagine we have a counter component. When the button is clicked we want the count to go up one:
We need some way to make our Counter
function run again if this value changes.
The React.useState
method can be used to create a "stateful" value. It takes the initial state value as an argument, and returns an array. This array contains two things: the state value, and a function that lets you update the state value.
It's common to use array destructuring to simplify this:
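The real hook needs React loaded, so here is a stand-in sketch that only shows the array shape and the destructuring (fakeUseState is obviously not the real hook, which also triggers a re-render):

```javascript
// A stand-in to illustrate the shape useState returns: NOT the real hook
function fakeUseState(initialValue) {
  let value = initialValue;
  const setValue = (next) => {
    value = next;
    // the real setter would also tell React to re-run the component
  };
  return [value, setValue];
}

// Array destructuring pulls out the pair, exactly like the real:
// const [count, setCount] = React.useState(0);
const [count, setCount] = fakeUseState(0);

console.log(count); // 0
console.log(typeof setCount); // function
```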
The setCount
function lets us update our state value and tells React to re-run this component. E.g. if we called setCount(10)
React will call our Counter
component function again, but this time the count
variable would be 10
instead of 0
.
This is how React keeps your UI in sync with the state.
We have a function that will let us update the state, but how do we attach event listeners to our DOM nodes?
You can pass event listener functions in JSX like any other property. They are always formatted as "on" followed by the camelCased event name. So "onClick", "onKeyDown", "onChange" etc.
In this example we are passing a function that calls setCount
with our new value of count
.
Time to add some state! Open up challenge-1.html
in your editor. You should see the Counter
component we just created. This is an example; you can delete it if you want.
Create a new component called Toggle
. It should render a button that toggles a boolean state value when clicked. It should also render a div containing its children, but only when the boolean state value is true.
Example usage:
React is designed to make it easy to keep your application in sync with your data/state. Component functions render DOM elements and keep them in sync with any state values.
But most apps need more than just a UI—there are also things like fetching data from an API, timers/intervals, global event listeners etc. These are known as "side effects"—they can't be represented with JSX.
We need a way to ensure our effects reflect changes in state just like our UI does.
React provides another "hook" like useState()
for running side-effects after your component renders. It's called useEffect()
. It takes a function as an argument, which will be run after every render (by default).
Let's say we want our counter component to also update the page title (so the count shows in the browser tab). There's no way to represent this update using the JSX our component returns. Instead we can use an effect:
React will run the arrow function we passed to useEffect()
every time this component renders. Since calling setCount
will trigger a re-render (as the state is updated) the page title will stay in sync with our state as the button is clicked.
By default all the effects in a component will re-run after every render of that component. This ensures the effect always has the correct state values.
If your effect does something expensive/slow like fetching from an API (or sorting a massive array etc) then this could be a problem.
useEffect()
takes a second argument to optimise when it re-runs: an array of dependencies for the effect. Any variable used inside your effect function should go into this array:
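For our title example the effect uses `count`, so `count` goes in the array (a sketch):

```jsx
React.useEffect(() => {
  document.title = `Count: ${count}`;
}, [count]); // only re-run this effect when count changes
```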
Now our effect will only re-run if the value of count
has changed.
Sometimes your effect will not be dependent on any props or state, and you only want it to run once (after the component renders the first time). In this case you can pass an empty array as the second argument to useEffect()
, to signify that the effect has no dependencies and never needs to be re-run.
For example if we wanted our counter to increment when the "up" arrow key is pressed:
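One way this might look (a sketch; note the callback form of `setCount`, which avoids reading a stale `count` from the closure):

```jsx
function Counter() {
  const [count, setCount] = React.useState(0);
  React.useEffect(() => {
    window.addEventListener("keydown", (event) => {
      if (event.key === "ArrowUp") {
        // callback form: always receives the latest count
        setCount((c) => c + 1);
      }
    });
  }, []); // empty array: run once, after the first render
  return <div>Count is {count}</div>;
}
```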
We add an event listener to the window
, and pass an empty array to useEffect()
. This will keep us from adding new event listeners every time count
updates and triggers a re-render.
Some effects need to be "cleaned up" if the component is removed from the page. For example timers need to be cancelled and global event listeners need to be removed. Otherwise you'd have a bunch of code running in the background trying to update a component that doesn't exist anymore.
If you return a function from your effect React will save it and call it if the component is removed from the page. React will also run it to clean up when a component re-renders (before the effects run again).
Let's clean up after our effect example from above:
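A sketch of the cleaned-up effect:

```jsx
React.useEffect(() => {
  const handleKeyDown = (event) => {
    if (event.key === "ArrowUp") setCount((c) => c + 1);
  };
  window.addEventListener("keydown", handleKeyDown);
  function cleanup() {
    window.removeEventListener("keydown", handleKeyDown);
  }
  return cleanup; // React calls this on unmount (and before re-running the effect)
}, []);
```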
The cleanup
function we return will be called if the component unmounts (is removed from the page). That will ensure we don't keep running an unnecessary event listener and trying to update state that doesn't exist anymore.
We're going to enhance our Toggle
component from Part 3. You can either keep working in the same file or open up challenge-2.html
to start fresh.
Edit the Toggle component so that the page title (in the tab) shows whether the toggle is on or off.
Then create a new component called MousePosition
. It should keep track of where the mouse is in the window and render the mouse x and y positions.
Put MousePosition
inside your Toggle
so you can show and hide it. This is how your final App
should look:
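Something like this (component names from the challenge):

```jsx
function App() {
  return (
    <Toggle>
      <MousePosition />
    </Toggle>
  );
}
```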
Learn about test-driven development and practice JS array methods
Test-driven development (TDD) is a methodology where you write tests before you write any code. This forces you to think through exactly how your code should behave. It's kind of like planning an essay before you start writing it. The iterative process of writing each test is supposed to help with solving a problem too.
TDD generally follows the "red, green, refactor" cycle.
Red
Write a test that fails. This is important: if you never see your test fail you might have a false positive (a test that passes even if your code is broken).
Green
Write as little code as possible to make the test pass. Make sure you don't break any existing tests.
Refactor
Change your code to improve it (if necessary). You have passing tests to tell you if you break anything.
Repeat
Go through the cycle until you think you have a complete working solution
Let's run through the process by creating a double
function using TDD. First we write a failing test:
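The first failing test might look like this. The `test` and `equal` helpers are assumptions sketched to match the shape the starter files provide:

```javascript
// assumed minimal test helpers, similar to those in the starter files
function equal(actual, expected) {
  if (actual !== expected) throw new Error(`Expected ${expected}, got ${actual}`);
}
function test(name, fn) {
  try {
    fn();
    console.log(`PASS: ${name}`);
  } catch (error) {
    console.log(`FAIL: ${name} (${error.message})`);
  }
}

// red: this fails, because double doesn't exist yet
test("double(1) should return 2", () => {
  equal(double(1), 2);
});
```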
Then we write as little code as we need to make the test pass:
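Assuming the first test only checks `double(1)`, the "green" step can be this minimal:

```javascript
// the simplest possible code that makes the first test pass
function double() {
  return 2;
}
```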
This will feel a bit contrived for a problem where we already know what the final code should be. The idea is not to try and solve the whole problem in one go—TDD is a way to help you solve a harder problem by iterating through solutions.
Then we refactor, if needed. Since we can't make this any simpler let's keep going and repeat the cycle. We need another failing test:
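The second test could check a different input, which our hard-coded implementation fails (a sketch):

```javascript
// current implementation from the previous step: always returns 2
function double() {
  return 2;
}

// red again: this expectation fails
const result = double(2);
console.log(result === 4 ? "PASS" : `FAIL: expected 4, got ${result}`);
```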
Once we see that fail we can amend our function to make it pass:
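"Green" again: the smallest change that passes both tests might just list both cases (a sketch):

```javascript
// passes both tests, but lists every input/output by hand
function double(n) {
  if (n === 1) return 2;
  if (n === 2) return 4;
}
```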
Once the test passes we can try to refactor our function to remove repetition. Instead of listing every possible input/output, we can see that we need to return the input multiplied by two each time.
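Refactored, the function might become:

```javascript
// same behaviour, no repetition
function double(n) {
  return n * 2;
}
```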
This solution looks complete, so we can end the TDD cycle here. It might be worth adding more tests for edge-cases (e.g. what happens when you don't pass any argument), but TDD has helped us solve the problem itself.
If you're still confused about the TDD process at the end of the workshop, there's a fully explained version of the solution at starter-files/solution/tdd-explanation.js
that walks through the process step-by-step.
We're going to re-create some useful JavaScript array methods using TDD. For example if we're re-creating the array.map
method we should use other JS features (like for
loops) to create a function that does the same thing as .map
, without using .map
itself.
For each method you should use TDD to write tests first, then write the actual code. Work in pairs and alternate: person 1 writes a test, then person 2 makes it pass. Then person 2 writes the next test and person 1 makes that pass.
Clone this repo
Open index.html
in your browser
Alternate writing tests and code in index.test.js
and index.js
You can see test results in the console
map
Use TDD to write your own map
function that behaves like the built-in one. The only difference should be that yours takes the array as the first argument:
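The expected usage is probably something like this (note the array comes first, unlike the built-in method):

```js
// built-in:  [1, 2, 3].map((x) => x + 1)   → [2, 3, 4]
// yours:
map([1, 2, 3], (x) => x + 1); // should give [2, 3, 4]
```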
There is one passing test and one failing test to get you started.
filter
Use TDD to write your own filter
function that behaves like the built-in one. The only difference should be that yours takes the array as the first argument:
every
Use TDD to write your own every
function that behaves like the built-in one. The only difference should be that yours takes the array as the first argument:
some
Use TDD to write your own some
function that behaves like the built-in one. The only difference should be that yours takes the array as the first argument:
find
Use TDD to write your own find
function that behaves like the built-in one. The only difference should be that yours takes the array as the first argument:
reduce
The function is called with the current value of the accumulator and the current element. Whatever you return from the function is used as the accumulator value for the next iteration. After the final element the final accumulator value is returned.
Use TDD to write your own reduce
function that behaves like the built-in one. The only difference should be that yours takes the array as the first argument:
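The expected usage is probably something like this (array first, then the reducer function, then the initial accumulator):

```js
reduce([1, 2, 3], (total, x) => total + x, 0); // should give 6
```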
flat
Use TDD to write your own flat
function that behaves like the built-in one. The only difference should be that yours takes the array as the first argument:
Hint: recursion or while
loops will be helpful.
Learn how to build dynamic interactions using form elements in React
We're going to build a simplified food delivery menu page. It'll have a list of dishes plus a form to filter them. The final result should look something like this:
Don't worry, we'll work our way there step by step.
Download starter files and cd
in
Run npm install
Run npm run dev
to start the dev server
Since React uses non-standard syntax (JSX) it requires some processing before it can run in the browser. We'll use Vite for this. Vite also provides a nice dev server that will auto-reload when you change files.
Open workshop/index.jsx
in your editor. This is where we render our React app to the DOM. You can see that we have a top-level component named App
. Open App.jsx
to see what's currently being rendered.
JSX supports multiple child elements like this:
This is the same as listing those child elements in an array, like this:
This isn't very ergonomic to write by hand, but it comes in handy when you need to render a dynamic list. We can generate an array from some data and render it:
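For example (a sketch, assuming a simple array of strings):

```jsx
const dishes = ["Pizza", "Salad", "Ramen"];

function Menu() {
  // turn each string into an <li> element
  const items = dishes.map((dish) => <li key={dish}>{dish}</li>);
  return <ul>{items}</ul>;
}
```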
It's common to inline the .map()
(although using a separate named variable is fine if you find it clearer):
We're passing a special prop called key
to the top-level element in our array. This allows React to keep track of where each element is so it doesn't mess up the order. key
should be unique and not change when the array order does. React will warn you if you forget this.
Uncomment the line importing "../data.js"
. This is an array of objects, each representing a dish in our restaurant. Use .map
to render all of them to the page inside the ul
.
Take a look at what data you have available for each dish and try to render it all. You should end up with something like this:
We want to be able to filter the list of dishes by minimum and maximum price. To do this we'll need to create two range inputs.
It can be a good idea to group and label related elements using the fieldset element.
Range inputs support constraining their values with the min/max/step attributes.
If we want these inputs to filter the list of dishes we'll need some state they can both use. For example we can create a state value called min
that we update whenever the range input changes. Later we will be able to use the same min
value to filter the dishes. By sharing the state value they'll always be in-sync.
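A sketch of the controlled minimum-price input (state names and min/max values are assumptions):

```jsx
function App() {
  const [min, setMin] = React.useState(0);
  return (
    <input
      type="range"
      min="0"
      max="10"
      value={min}
      onChange={(event) => setMin(Number(event.target.value))}
    />
  );
}
```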
Add the second range input for the maximum price. You'll need another state variable to control the input's value.
You should end up with something like this:
Now we need to filter our dish list based on the price state.
You should have something like this:
Our App
component is starting to get a bit unwieldy. We've got a single function containing all our state, plus two totally separate sections of the page. Let's try splitting it up into a couple of smaller components.
Create two new files: DishList.jsx
and PriceFilter.jsx
. DishList.jsx
should contain the <ul>
of dishes; PriceFilter.jsx
should contain the fieldset
with the range inputs.
Remember these components need to share the same state. This means we can't define it down in each child component—the state needs to live in their shared parent (App
) and be passed down to each child via props.
We also want to filter our dishes by category. This is a good use-case for a group of radio inputs, since the categories are mutually exclusive.
Create a new file called CategoryFilter.jsx
and make a new component in it. We need a radio input for each category.
You'll also need to create a state value to keep track of which radio is selected. Since this state will be needed to filter the DishList
component it will need to live in their shared parent (App
) and be passed down as a prop (just like the min/max state).
You can use the checked
prop to determine which radio should be checked, based on the current state value. Here's an example:
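A sketch of one radio input, assuming `category`/`setCategory` state in the parent passed down as props:

```jsx
<input
  type="radio"
  name="category"
  value="burgers"
  checked={category === "burgers"}
  onChange={(event) => setCategory(event.target.value)}
/>
```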
You should end up with something like this:
Now we need to filter our list by category as well as the existing price filters. Use your category state value to filter the array in DishList
. Make sure you keep the price filter working.
If everything is hooked up correctly you should see something like this 🎉
Add a default "All" option to the category filter
Add a text input that lets users search for dishes by title
Make it look even better 💅
Learn the fundamentals of using SQL to query a database
In this workshop we will be learning SQL by running commands in our terminal.
Make sure you have PostgreSQL installed.
We'll be using psql
, the Postgres command-line interface. This lets you run SQL queries and also provides some extra commands for working with the database. These extras start with a backslash character (e.g. \c
) whereas SQL is usually uppercase (e.g. CREATE DATABASE
).
Download the starter files and cd
into the directory. Type psql
in your terminal to enter the Postgres command-line interface. You can type ctrl + d to exit this at any time.
To create a database use the CREATE DATABASE
command and give it whatever name you like:
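For example, using the name the rest of this workshop assumes:

```sql
CREATE DATABASE blog_workshop;
```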
You should now be able to use \list
to list all the databases on your machine. Hopefully the new blog_workshop
is there. You can type q
to exit this view.
You can then connect to the new database using the \connect
command:
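Assuming you used the name above:

```sql
\connect blog_workshop
```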
Now you need to populate the database with some data. The init.sql
file contains a bunch of SQL commands. They create some tables and then insert data into them.
You can use \include
to run some SQL directly from a file (which saves a lot of typing):
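Assuming you're running `psql` from the directory containing the file:

```sql
\include init.sql
```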
If you run \dt
you should see all the database tables we just created (blog_posts
, blog_comments
and users
).
A "schema" represents all the different things in a database. It says what type of data goes in each column, what columns are in each table, and how tables relate to each other. The schema is represented by the initial SQL used to create the tables (here inside the init.sql
file).
SQL requires us to specify what type of data we're going to use for each entry in advance. Here's a small subset of available types:
SERIAL
An auto-incrementing number. Useful for IDs where each new entry needs a unique value. SQL will automatically create this when you insert an entry.
VARCHAR(255)
A variable-length string. The number in brackets specifies the maximum number of characters.
TEXT
A string of any length.
INTEGER
A whole number (like 20
). No fractions allowed.
A way to provide additional fine-tuning of a data type. Think of it like input validation. Here are a few useful constraints:
NOT NULL
This value is required and must always be set.
PRIMARY KEY
This value is the unique identifier for this entry into the table. Often a SERIAL
so you don't have to worry about creating unique IDs yourself.
REFERENCES
This value must match one in another table, like users(id)
. Used to link tables together so you can find related information (e.g. which user wrote this blog post).
This specific database represents a blog site. It has users who can write blog posts, and blog posts that can contain comments.
A blog post has to have an author, so each entry in blog_posts
has a user_id
, which REFERENCES
an id
in the users
table. This links the two together, so for any given post we can always find the author.
Comments are linked to both a user
and a blog_post
, so they have two REFERENCES
: post_id
and user_id
.
Here is the example schema for the blog_post
table:
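Based on the column descriptions above, the `CREATE TABLE` statement is roughly:

```sql
CREATE TABLE blog_posts (
  id SERIAL PRIMARY KEY,
  user_id INTEGER REFERENCES users(id),
  text_content TEXT
);
```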
Here's a quick overview of some SQL commands used to retrieve data from a database.
SELECT
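For example, a query like this (using the workshop's `users` table):

```sql
SELECT first_name FROM users;
```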
would retrieve the first_name
column for every row in the users
table.
Note you can provide comma-separated lists of column names and table names if you want to select multiple things. You can also use the *
character to select all columns.
WHERE
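For example:

```sql
SELECT first_name FROM users WHERE id = 1;
```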
would retrieve the first name column for any users with an ID of
.
AND
, OR
and NOT
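For example:

```sql
SELECT first_name FROM users WHERE id = 1 OR id = 2;
```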
would retrieve the first name column for any users with an ID of 1
or 2
.
IN
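For example:

```sql
SELECT first_name FROM users WHERE id IN (1, 2);
```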
would select the first name column for any users with an ID of 1
or 2
.
This is similar to the OR
operator we saw above.
Select specific columns
Expected Result
Select users conditionally
Expected Result
Select users using multiple conditions
Using SELECT
and WHERE
, retrieve the first name, last name and location of the user who lives in Saxilby, UK
and is older than 40.
Expected Result
Select posts using multiple conditions
Expected Result
Here's an overview of SQL commands used to add data to a database.
INSERT INTO
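A sketch (the `users` table may have other columns too):

```sql
INSERT INTO users (username, first_name) VALUES ('oliverjam', 'oli');
```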
would create a new user row with a username of 'oliverjam'
and first name of 'oli'
.
UPDATE
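For example:

```sql
UPDATE users SET first_name = 'oliver' WHERE username = 'oliverjam';
```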
would update the first name of the user with username "oliverjam"
to be "oliver"
.
RETURNING
You can access the created/changed rows with a RETURNING
clause after your INSERT
or UPDATE
. This lets you specify which columns you want back. This saves you doing a whole extra SELECT
after an insert just to get the new entry's ID.
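A sketch of an insert with a `RETURNING` clause:

```sql
INSERT INTO users (username) VALUES ('oliverjam') RETURNING id, username;
```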
Would return:
Adding a new post
Expected Result
Updating an existing post
You can then run SELECT user_id FROM blog_posts WHERE text_content='Hello World';
to test for the expected result.
Expected Result
There are different types of joins that determine exactly what data is returned. Since we're selecting from multiple tables we namespace our columns with the table name and a .
, just like object access in JavaScript (e.g. SELECT users.username, blog_posts.text_content
).
INNER JOIN
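A likely example, joining on the foreign key that links the two tables:

```sql
SELECT users.username, blog_posts.text_content
FROM users
INNER JOIN blog_posts ON users.id = blog_posts.user_id;
```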
INNER JOIN
returns only the users that have blog posts.
LEFT JOIN
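The same query with a `LEFT JOIN` instead:

```sql
SELECT users.username, blog_posts.text_content
FROM users
LEFT JOIN blog_posts ON users.id = blog_posts.user_id;
```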
LEFT JOIN
selects one extra row here compared to INNER JOIN
: the final user "Spont1935" who has no blog posts.
RIGHT JOIN
Selecting users and comments
Expected Result
Selecting blog posts and comments
Expected Result
Bonus: select the user who made a comment
Expand your previous solution to also include the username of the user who made each comment.
Expected Result
You can nest SQL expressions. For example:
is the equivalent of:
if there's a human with ID 1 and name 'oli'. The nested query is resolved first, similar to using brackets in maths.
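A sketch of such a pair of queries, assuming a `humans` table with `id` and `name` columns as in the example:

```sql
-- the nested query resolves first...
SELECT * FROM blog_posts WHERE user_id = (SELECT id FROM humans WHERE name = 'oli');
-- ...so this is equivalent to:
SELECT * FROM blog_posts WHERE user_id = 1;
```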
Add a new comment to the post_comments
table. It should have a user ID of 3
and text content 'Interesting post'
. The comment should be linked to whichever post has text content of 'Peculiar trifling absolute and wandered vicinity property yet.'
(i.e. its post_id
should be the ID of that post).
You can then run SELECT text_content FROM post_comments WHERE post_id = 2;
to test for the expected result.
map is used to transform each value in an array. It takes a function as an argument, then loops over each element in the array and calls the function with each one. Whatever that function returns is used as a new value in a new array.
filter is used to remove elements you don't want from an array. It takes a function as an argument, then loops over each element in the array and calls the function with each one. If the function returns true the element is kept, otherwise it is filtered out.
every is used to check whether every element in an array meets a certain criteria. It takes a function as an argument, then loops over each element in the array and calls the function with each one. If the function returns false for any of the elements the iteration stops and false
is immediately returned. If the function returns true for every element then true
is returned.
some is used to check whether at least one element in an array meets a certain criteria. It takes a function as an argument, then loops over each element in the array and calls the function with each one. If the function returns true
for any of the elements the iteration stops and true
is immediately returned. Otherwise it returns false
.
find is used to get the first element in an array that meets a certain criteria. It takes a function as an argument, then loops over each element in the array and calls the function with each one. If the function returns true
for the element the iteration stops and the element is immediately returned. If the function returns false
for every element then undefined
is returned.
reduce is used to transform an array into a single value. It takes a function and an initial "accumulator" value as arguments. It loops over the array, building up the accumulator on each loop.
flat is used to turn nested arrays into "flattened" ones. It takes an optional depth argument to flatten arrays nested more than one level down.
SELECT retrieves data from a table. You need to combine it with FROM
to specify which table. For example:
WHERE is a clause that qualifies a SELECT
. It lets you filter which rows are retrieved based on the values in that row. For example:
AND, OR and NOT are operators for expressing logic in your WHERE
clauses. They let you apply multiple conditions. For example:
The IN operator lets you match against a list of values in your WHERE
clause. For example:
Using SELECT, retrieve a list of only usernames and locations from the users
table
Using SELECT
and WHERE, retrieve every column for all users who are older than 40.
Using WHERE
and IN, retrieve the user ID and text content columns for posts created by users with IDs of 2
or 3
.
INSERT INTO lets you add a new row into a table. You specify a table name and list of columns, then a list of values to insert. The values have to match positions with their respective columns (like function arguments in JS).
UPDATE lets you change existing data in a table. You provide the table name, then the name and new value of each column. You also need to provide a WHERE
clause to select which rows to update, otherwise every row will be changed.
Using INSERT INTO and RETURNING
, add a blog post with the text "Hello World" to the user with ID 1
. Return the text content and user ID of the inserted post.
Using UPDATE, change the blog post from the previous question so its author is the user with ID 2
. Make sure you don't change any other posts.
We can use JOINs to select columns from multiple tables at once, based on a relation they share. Joins effectively combine multiple tables into one temporary table for you to query.
INNER JOIN selects rows that have matching values in both tables being selected from. For example if we wanted to select all the users who have blogposts, then get their usernames and their blog posts' text content:
LEFT JOIN selects every entry in the first table you name, but only matched records from the second. For example if we wanted a list of every user, plus their blog posts' text content (if they have any):
RIGHT JOIN is like the opposite of LEFT JOIN
. With our blog post data the result would be the same as an INNER JOIN
, since every post must have an author.
Using LEFT JOIN, select every user's location, plus the content of any comments they've made.
Using INNER JOIN, select only blog posts with comments, returning the text_content of the blog posts and the text_content of the comments.
| Column | Type | Constraints |
| ------------- | ------- | --------------------- |
| id | SERIAL | PRIMARY KEY |
| user_id | INTEGER | REFERENCES users(id) |
| text_content | TEXT | |
| first_name |
| ---------- |
| Alisha |
| Chelsea |
| ... |
| first_name |
| ---------- |
| Alisha |
| first_name |
| ---------- |
| Alisha |
| Chelsea |
| first_name |
| ---------- |
| Alisha |
| Chelsea |
| username | location |
| --------- | -------------- |
| Sery1976 | Middlehill, UK |
| Notne1991 | Sunipol, UK |
| Moull1990 | Wanlip, UK |
| Spont1935 | Saxilby, UK |
| id | username | age | first_name | last_name | location |
| -- | --------- | --- | ---------- | --------- | ----------- |
| 3 | Moull1990 | 41 | Skye | Hobbs | Wanlip, UK |
| 4 | Spont1935 | 72 | Matthew | Griffin | Saxilby, UK |
| first_name | last_name | location |
| ---------- | --------- | ----------- |
| Matthew | Griffin | Saxilby, UK |
| user_id | text_content |
| ------- | ---------------------------------------------------------------------- |
| 2 | Peculiar trifling absolute and wandered vicinity property yet. decay. |
| 3 | Far stairs now coming bed oppose hunted become his. |
| id | username |
| -- | --------- |
| 1 | oliverjam |
| text_content | user_id |
| ------------ | ------- |
| Hello World | 1 |
| user_id |
| ------- |
| 2 |
| username | text_content |
| --------- | ------------------------------------------------------------------ |
| Sery1976 | Announcing of invitation principles in. |
| Notne1991 | Peculiar trifling absolute and wandered vicinity property yet. son. |
| Moull1990 | Far stairs now coming bed oppose hunted become his. |
| username | text_content |
| --------- | ------------------------------------------------------------------ |
| Sery1976 | Announcing of invitation principles in. |
| Notne1991 | Peculiar trifling absolute and wandered vicinity property yet. son. |
| Moull1990 | Far stairs now coming bed oppose hunted become his. |
| Spont1935 | |
| location | text_content |
| -------------- | ---------------- |
| Middlehill, UK | |
| Sunipol, UK | Great blog post! |
| Wanlip, UK | |
| Saxilby, UK | |
| text_content | text_content |
| --------------------------------------------------- | ---------------- |
| Far stairs now coming bed oppose hunted become his. | Great blog post! |
| text_content | text_content | username |
| --------------------------------------------------- | ---------------- | --------- |
| Far stairs now coming bed oppose hunted become his. | Great blog post! | Notne1991 |
| text_content |
| ---------------- |
| Interesting post |