
Play your retro console on a modern TV

Par Ryan Lambie

Want to connect your retro console to your modern TV? The latest issue of Wireframe magazine has the only guide you need, courtesy of My Life in Gaming’s Mark Duddleson.

“Get a Raspberry Pi. Done.” It’s probably the most frequently recurring comment we get across all videos on the My Life in Gaming YouTube channel, which often revolve around playing classic games on original hardware. Not everyone has held onto their old consoles through the years, so I get it.

PS1Digital on a 4K OLED TV

Software emulation, whether through a PC, Raspberry Pi, or any other device, is easy on your wallet and solid enough to give most people the experience they’re looking for.

For me, though, the core of my gaming experience still tends to revolve around the joy I feel in using authentic cartridges and discs. But as you may have noticed, 2021 isn’t 2001, and using pre-HDMI consoles isn’t so easy these days. A standard CRT television is the most direct route to getting a solid experience with vintage consoles.

Standard RCA cables with composite video. A direct HDTV connection is a poor experience

But let’s face it – not everyone is willing to work a CRT into their setup. Plenty of people are content with just plugging the cables that came with their old systems (usually composite) into their HD or 4K TV – and that’s OK! But whether for the blurry looks or the input lag they feel, this simply isn’t good enough for a lot of people.

Down the rabbit hole

“There has to be a better way,” you say as you browse Amazon’s assortment of analogue-to-HDMI converters, HDMI adapters like Wii2HDMI, or HDMI cables for specific consoles by a variety of brands. You might think these are just what you’re looking for, but remember: your TV has its own internal video processor. Just like your TV, they’re going to treat 240p like 480i. Not only is it unnecessary to deinterlace 240p, but doing so actively degrades the experience – motion-adaptive deinterlacing takes time, adding input lag.

RetroTINK-2X MINI (left) and 2X Pro (right). The MINI pairs great with N64

That Sega Saturn HDMI cable is going to deinterlace your gorgeous 240p sprite-based games so hard that they’ll look like some sort of art restoration disaster in motion. The dark secret of these products is that you’re buying something you already own – a basic video processor designed for video, not video games, and the result will likely not be tangibly better than what your TV could do. The only reason to go this route is if you have no analogue inputs and could not possibly invest more than $30.

So what is the better way? The primary purpose of an external video processor is to send a properly handled signal to your TV that won’t trigger its lag-inducing processes and turn your pixels into sludge – basically any progressive resolution other than 240p. Luckily, there are several devices in various price ranges that are designed to do exactly this.

There is lots more to learn!

This is just a tiny snippet of the mammoth feature in Wireframe magazine issue 49. The main feature includes a ‘jargon cheat sheet’ and ‘cable table’ to make sure any level of user can get their retro console working on a modern TV.

If you’re not a Wireframe magazine subscriber, you can download a PDF copy for free. Head to page 50 to get started.

You can read more features like this one in Wireframe issue 49, available directly from Raspberry Pi Press — we deliver worldwide.

The post Play your retro console on a modern TV appeared first on Raspberry Pi.

Raspberry Pi automatically refills your water bottle

Par Ashley Whittaker

YouTuber Chris Courses takes hydration seriously, but all those minutes spent filling up water bottles take a toll. 15 hours per year, to be exact. Chris regularly uses three differently sized water bottles and wanted to build something to fill them all to their exact measurements.

(Polite readers may like to be warned of a couple of bleeped swears and a rude whiteboard drawing a few minutes into this video.)

Hardware

  • Raspberry Pi
  • Water filter (Chris uses this one, which you would find in a fridge with a built-in water dispenser)
  • Solenoid valve (which only opens when an electrical signal is sent to it)
The solenoid valve and Raspberry Pi, which work together to make this project happen

How does the hardware work?

The solenoid valve determines when water can and cannot pass through. Mains water comes in through one tube and passes through the water filter, then the solenoid valve releases water via another tube into the bottle.

Diagram of the water bottle filler setup, hand-drawn by the maker
See – simples!

What does the Raspberry Pi do?

The Raspberry Pi sends a signal to the solenoid valve telling it to open for a specific amount of time — the length of time it takes to fill a particular water bottle — and to close when that time expires. Chris set this up to start running when he clicks a physical button.
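
Chris’s script isn’t shown in full in the video, but the behaviour he describes is simple to sketch. Here’s a minimal, hypothetical version using the gpiozero library; the pin numbers, relay wiring, and fill time are all assumptions for illustration, not details from Chris’s build.

# Minimal sketch of the idea (not Chris's actual code): pressing a
# button opens a relay-driven solenoid valve for a fixed fill time.
from gpiozero import Button, OutputDevice
from signal import pause
from time import sleep

valve = OutputDevice(17)   # hypothetical GPIO pin driving the valve relay
button = Button(2)         # hypothetical GPIO pin wired to the push button

FILL_SECONDS = 8.5         # assumed time to fill one particular bottle

def fill_bottle():
    valve.on()             # open the solenoid valve
    sleep(FILL_SECONDS)    # wait while the bottle fills
    valve.off()            # close the valve again

button.when_pressed = fill_bottle
pause()                    # keep the script alive, waiting for presses

Supporting Chris’s three bottle sizes would simply mean three buttons, each bound to a handler with a different fill time.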

We feel the same way about Raspberry Pi, Chris

Chris also programmed lights to indicate when the dispenser is turned on. This manual coding proved to be the most time-consuming part of the project.

But all the wires look so ugly!

Water dispenser mounted onto side of fridge
Sleek and discreet

Chris agreed, so he 3D-printed a beautiful enclosure to house what he dubs the ‘Hydrobot 5000’. It’s a sleek black casing that sits pretty in his kitchen on a wall next to the fridge. It took a fair bit of fridge shuffling and electrical mounting to “sit pretty”, however. This Raspberry Pi-powered creation needed to be connected to a water source, so the tubing had to be snaked from Hydrobot 5000, behind appliances, to the kitchen sink.

Check out those disco lights! Nice work, Chris. Follow Chris on YouTube for loads more coding and dev videos.

The post Raspberry Pi automatically refills your water bottle appeared first on Raspberry Pi.

Raspberry Pi Zero W turns iPod Classic into Spotify music player

Par Ashley Whittaker

Recreating Apple’s iconic iPod Classic as a Spotify player may seem like sacrilege but it works surprisingly well, finds Rosie Hattersley. Check out the latest issue of The MagPi magazine (pages 8–12) for a tutorial to follow if you’d like to create your own.

Replacement Raspberry Pi parts laying inside an empty iPod case to check they will fit

When the original iPod was launched, the idea of using it to run anything other than iTunes seemed almost blasphemous. The hardware remains a classic, but our loyalties are elsewhere with music services these days. If you still love the iPod but aren’t wedded to Apple Music, Guy Dupont’s Spotify hack makes a lot of sense. “It’s empowering as a consumer to be able to make things work for me – no compromises,” he says. His iPod Classic Spotify player project cost around $130, but you could cut costs with a different streaming option.

“I wanted to explore what Apple’s (amazing) original iPod user experience would feel like in a world where we have instant access to tens of millions of songs. And, frankly, it was really fun to take products from two competitors and make them interact in an unnatural way.” 

Guy Dupont
Installing the C-based haptic code on Raspberry Pi Zero, and connecting Raspberry Pi, display, headers, and leads

Guy’s career spans mobile phone app development, software engineering, and time in recording studios in Boston as an audio engineer, so a music tech hack makes sense. He first used Raspberry Pi for its static IP so he could log in remotely to his home network, and later as a means of monitoring his home during a renovation project. Guy likes using Raspberry Pi when planning a specific task because he can “program [it] to do one thing really well… and then I can leave it somewhere forever”, in complete contrast to his day job. 

Mighty micro

Guy seems amazed at having created a Spotify streaming client that lives inside, and can be controlled by, an old iPod case from 2004. He even recreated the iPod’s user interface in software, right down to the font. A ten-year-old article about the click wheel provided some invaluable functionality insights and allowed him to write code to control it in C. Guy was also delighted to discover an Adafruit display that’s the right size for the case, doesn’t expose the bezels, and uses composite video input so he could drive it directly from Raspberry Pi’s composite out pins, using just two wires. “If you’re not looking too closely, it’s not immediately obvious that the device was physically modified,” he grins.

All replacement parts mounted in the iPod case

Guy’s retro iPod features a Raspberry Pi Zero W. “I’m not sure there’s another single-board computer this powerful that would have fit in this case, let alone one that’s so affordable and readily available,” he comments. “Raspberry Pi did a miraculous amount of work in this project.” The user interface is a Python app, while Raspberry Pi streams music from Spotify via Raspotify, reads user input from the iPod’s click wheel, and drives a haptic motor – all at once. 

Guy managed to use a font for the music library that looks almost exactly the same as Apple’s original

Most of the hardware for the project came from Guy’s local electronics store, which has a good line in Raspberry Pi and Adafruit components. He had a couple of attempts to get the right size of haptic motor, but most things came together fairly easily after a bit of online research. Help, when he needed it, was freely given by the Raspberry Pi community, which Guy describes as “incredible”.

Things just clicked 

Guy previously used Raspberry Pi to stream albums around his home

Part of the fun of this project was getting the iPod to run a non-Apple streaming service, so he’d also love to see versions of the iPod project using different media players. You can follow his instructions on GitHub.

Next, Guy intends to add a DAC (digital to analogue converter) for the headphone jack, but Bluetooth works for now, even connecting from inside his jacket pocket, and he plans to get an external USB DAC in time. 

The post Raspberry Pi Zero W turns iPod Classic into Spotify music player appeared first on Raspberry Pi.

214 teams granted Flight Status for Astro Pi Mission Space Lab 2020/21!

Par Claire Given

The Raspberry Pi Foundation and ESA Education are excited to announce that 214 teams participating in Mission Space Lab of this year’s European Astro Pi Challenge have achieved Flight Status. That means they will have their computer programs run on the International Space Station (ISS) later this month!

ESA Astronaut Thomas Pesquet with the Astro Pi computers onboard the ISS

Mission Space Lab gives teams of students and young people up to 19 years of age the amazing opportunity to conduct scientific experiments aboard the ISS, by writing code for the Astro Pi computers — Raspberry Pi computers augmented with Sense HATs. Teams can choose between two themes for their experiments, investigating either life in space or life on Earth.

Life in space

For ‘Life in space’ experiments, teams use the Astro Pi computer known as Ed to investigate life inside the Columbus module of the ISS. For example, past teams have:

  • Used the Astro Pi’s accelerometer sensor to compare the motion of the ISS during normal flight with its motion during course corrections and reboost manoeuvres
  • Investigated whether influenza is transmissible on a spacecraft such as the ISS
  • Monitored pressure inside the Columbus module to be able to warn the astronauts on board of space debris or micrometeoroids colliding with the station
  • And much more
Compilation of photographs of Earth, taken by Astro Pi Izzy aboard the ISS

Life on Earth

In ‘Life on Earth’ experiments, teams investigate life on our home planet’s surface using the Astro Pi computer known as Izzy. Izzy’s near-infrared camera (with a blue optical filter) faces out of a window in the ISS and is pointed at Earth. For example, past teams have:

  • Investigated variations in Earth’s magnetic field
  • Used machine learning to identify geographical areas that had recently suffered from wildfires
  • Studied climate change based on coastline erosion over the past 30 years
  • And much besides

Phases 1 and 2 of Mission Space Lab

In Phase 1 of Mission Space Lab, teams only have to submit an experiment idea. Our team then judges the teams’ ideas based on their originality, feasibility, and use of hardware. This year, 426 teams submitted experiment ideas, with 396 progressing to Phase 2.

Timeline of Mission Space Lab in 2020/21, part of the European Astro Pi Challenge

At the beginning of Phase 2 of the challenge, we send our special Astro Pi kits to the teams to help them write and test their programs. The kits contain hardware that is similar to the Astro Pi computers in space, including a Raspberry Pi 3 Model B, Raspberry Pi Sense HAT, and Raspberry Pi Camera Modules (V2 and NoIR).

Astro Pi kit box.

Mission Space Lab teams then write the programs for their experiments in Python. Once teams are happy with their programs, have tested them on their Astro Pi kits, and submitted them to us for judging, we run a series of tests on them to ensure that they follow experiment rules and can run without errors on the ISS. The experiments that meet the relevant criteria are then awarded Flight Status.
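
To give a flavour of what Phase 2 code looks like, here’s a minimal, hypothetical example of the kind of Python program a ‘Life in space’ team might build on, logging Sense HAT readings to a CSV file for a fixed duration. The runtime, filename, and column choices are our assumptions, not official requirements.

# Hypothetical Mission Space Lab scaffold (not an actual team entry):
# log Sense HAT readings every ten seconds for just under three hours.
from datetime import datetime, timedelta
from sense_hat import SenseHat
from time import sleep
import csv

sense = SenseHat()
end_time = datetime.now() + timedelta(minutes=175)  # assumed runtime limit

with open("data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "temperature", "humidity", "pressure"])
    while datetime.now() < end_time:
        writer.writerow([datetime.now().isoformat(),
                         sense.get_temperature(),
                         sense.get_humidity(),
                         sense.get_pressure()])
        sleep(10)   # one reading every ten seconds

A real entry would also need careful error handling, since a crashed program on the ISS can’t be debugged from the ground.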

Phase 3: Flight Status achieved

The 214 teams awarded Flight Status this year represent 21 countries and 862 young people, with 30% female participants. 137 teams with ‘Life on Earth’ experiments and 77 teams with ‘Life in space’ experiments have successfully made it through to Phase 3.

Spain has the most teams progressing to the next phase (26), closely followed by the UK (25), Romania (21), France (21) and Greece (18).

In the next few weeks, the teams’ experiments will be deployed to the Astro Pi computers on the ISS, and most of them will run overseen by ESA Astronaut Thomas Pesquet, who is going to fly to the ISS on 22 April on his new mission, Alpha.

In the final phase, we’ll send the teams the data their experiments collect, to analyse and write short reports about their findings. Based on these reports, we and the ESA Education experts will determine the winner of this year’s Mission Space Lab. The winning and highly commended teams will receive special prizes. Last year’s outstanding teams got to take part in a Q&A with ESA astronaut Luca Parmitano!

Well done to everyone who has participated, and congratulations to all the successful teams. We are really looking forward to reading your reports!

Logo of Mission Space Lab, part of the European Astro Pi Challenge.

The post 214 teams granted Flight Status for Astro Pi Mission Space Lab 2020/21! appeared first on Raspberry Pi.

Remake Manic Miner’s collapsing platforms | Wireframe #49

Par Ryan Lambie

Traverse a crumbly cavern in our homage to a Spectrum classic. Mark Vanstone has the code

One of the most iconic games on the Sinclair ZX Spectrum featured a little man called Miner Willy, who spent his days walking and jumping from platform to platform collecting the items needed to unlock the door on each screen. Manic Miner’s underground world featured caverns, processing plants, killer telephones, and even a forest featuring little critters that looked suspiciously like Ewoks.

Written by programmer Matthew Smith and released by Bug-Byte in 1983, the game became one of the most successful titles on the Spectrum. Smith was only 16 when he wrote Manic Miner and even constructed his own hardware to speed up the development process, assembling the code on a TRS-80 and then downloading it to the Spectrum with his own hand-built interface. The success of Manic Miner was then closely followed by Jet Set Willy, featuring the same character, and although they were originally written for the Spectrum, the games very soon made it onto just about every home computer of the time.

Miner Willy makes his way to the exit, avoiding those vicious eighties telephones.

Both Manic Miner and Jet Set Willy featured unstable platforms which crumbled in Willy’s wake, and it’s these we’re going to try to recreate this month. In this Pygame Zero example, we need three frames of animation for each of the two directions of movement. As we press the arrow keys we can move the Actor left and right, and in this case, we’ll decide which frame to display based on a count variable, which is incremented each time our update() function runs. We can create platforms from a two-dimensional data list representing positions on the screen with 0 meaning a blank space, 1 being a solid platform, and 2 a collapsible platform. To set these up, we run through the list and make Actor objects for each platform segment.

For our draw() function, we can blit a background graphic, then Miner Willy, and then our platform blocks. During our update() function, apart from checking key presses, we also need to do some gravity calculations. This will mean that if Willy isn’t standing on a platform or jumping, he’ll start to fall towards the bottom of the screen. Instead of checking to see if Willy has collided with the whole platform, we only check to see if his feet are in contact with the top. This means he can jump up through the platforms but will then land on the top and stop. We set a variable to indicate that Willy’s standing on the ground so that when the SPACE bar is pressed, we know if he can jump or not.

While we’re checking if Willy’s on a platform, we also check to see if it’s a collapsible one, and if so, we start a timer so that the platform moves downwards and eventually disappears. Once it’s gone, Willy will fall through. The reason we have a delayed timer rather than just starting the platform heading straight down is so that Willy can run across many tiles before they collapse, but his way back will quickly disappear. The disappearing platforms are achieved by changing the image of the platform block as it moves downward.
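
Mark’s full program is in the download linked below, but the core ideas (the platform data list, feet-only collision, and the delayed collapse timer) can be sketched in a few dozen lines. This condensed Pygame Zero version is ours rather than Mark’s, and assumes you supply willy, solid, and crumbly images in the usual images folder.

# Condensed sketch of the mechanics described above (not Mark's code).
# 0 = blank, 1 = solid platform, 2 = collapsible platform.
import pgzrun

WIDTH, HEIGHT = 640, 480

level = [[1, 1, 0, 2, 2, 0, 1, 1, 2, 1]]
platforms = []
for row, blocks in enumerate(level):
    for col, block in enumerate(blocks):
        if block:
            p = Actor("solid" if block == 1 else "crumbly",
                      topleft=(col * 64, 300 + row * 64))
            p.kind = block
            p.timer = -1            # -1 means collapse not yet triggered
            platforms.append(p)

willy = Actor("willy", topleft=(0, 100))
willy.vy = 0
willy.on_ground = False

def update():
    if keyboard.left:
        willy.x -= 4
    if keyboard.right:
        willy.x += 4
    willy.vy += 1                   # gravity pulls Willy down
    willy.y += willy.vy
    willy.on_ground = False
    for p in platforms:
        # only Willy's feet count, so he can jump up through platforms
        if willy.vy >= 0 and p.collidepoint(willy.midbottom):
            willy.bottom = p.top
            willy.vy = 0
            willy.on_ground = True
            if p.kind == 2 and p.timer < 0:
                p.timer = 30        # delay lets him run across first
    if keyboard.space and willy.on_ground:
        willy.vy = -15              # jump impulse
    for p in platforms[:]:
        if p.timer > 0:
            p.timer -= 1            # count down to collapse
        elif p.timer == 0:
            p.y += 2                # sink away; Willy falls through the gap
            if p.top > HEIGHT:
                platforms.remove(p)

def draw():
    screen.clear()
    for p in platforms:
        p.draw()
    willy.draw()

pgzrun.go()

A fuller version would animate Willy’s three walking frames per direction from a count variable, exactly as described above.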

As we’ve seen, there were several other elements to each Manic Miner screen, such as roaming bears that definitely weren’t from Star Wars, and those dastardly killer telephones. We’ll leave you to add those.

Here’s Mark’s code for a Manic Miner-style platformer. To get it working on your system, you’ll need to install Pygame Zero. And to download the full code and assets, head here.

Get your copy of Wireframe issue 49

You can read more features like this one in Wireframe issue 49, available directly from Raspberry Pi Press — we deliver worldwide.

And if you’d like a handy digital version of the magazine, you can also download issue 49 for free in PDF format.

The post Remake Manic Miner’s collapsing platforms | Wireframe #49 appeared first on Raspberry Pi.

Easter fun with Raspberry Pi

Par Ashley Whittaker

Easter is nearly upon us, and we’ll be stepping away from our home-office desks for a few days. Before we go, we thought we’d share some cool Easter-themed projects from the Raspberry Pi community.

Egg-painting robot

Teacher Klaus Rabeder designed, 3D-printed, and built a robot which his students programmed in Python to paint eggs with Easter designs. Each student came up with their own design and then programmed the robot to recreate it. The robot can draw letters and numbers, patterns, and figures (such as an Easter bunny) on an egg, as well as a charming meadow made of randomly calculated blades of grass. Each student took home the egg bearing their unique design.

The machine has three axes of movement: one that rotates the egg, one that moves the pens up and down, and one that makes servo motors put the pen tips onto the egg’s surface. Each servo is connected to two pens. Springs between the servo and pen make sure not too much pressure is applied.
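
To give a taste of how one of those axes might be driven in Python, here’s a hypothetical pen-lift using gpiozero’s AngularServo. The pin number and angles are invented for illustration and aren’t taken from Klaus’s build.

# Hypothetical pen-lift axis (not Klaus's actual code): a servo swings
# the pen tip onto the egg, and the spring limits pressure on the shell.
from gpiozero import AngularServo
from time import sleep

pen_servo = AngularServo(18, min_angle=0, max_angle=90)  # assumed GPIO pin

def pen_down():
    pen_servo.angle = 35   # assumed angle at which the tip meets the shell
    sleep(0.3)             # give the servo time to travel

def pen_up():
    pen_servo.angle = 0    # lift the pen clear of the egg
    sleep(0.3)

pen_down()
sleep(1.0)                 # draw while the egg-rotation axis turns
pen_up()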

What a cool way to spend your computing lessons!

Digital Easter egg hunt

Go digital this Easter

Why hunt for chocolate eggs in a race against time before they melt, when you can go digital? Our very own Alex made this quick and easy game with a Raspberry Pi, a few wires, and some simple code. Simply unwrap your chocolate eggs and rewrap them with the silver side of the foil facing outwards to make them more conductive. The wires create a circuit, and when the circuit is closed with the foil-wrapped egg, the Raspberry Pi reveals the location of a bigger chocolate egg.
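
Alex’s code is linked below; as a flavour of the approach, here’s a minimal hypothetical sketch using gpiozero, treating each foil-wrapped egg as a switch that closes a GPIO pin to ground. The pin numbers and clue text are invented.

# Each egg bridges one GPIO pin to GND; gpiozero's Button treats the
# closed circuit as a press. (A sketch of the idea, not Alex's code.)
from gpiozero import Button
from signal import pause

eggs = {17: "under the stairs", 27: "behind the plant pot"}  # hypothetical

def make_handler(clue):
    def reveal():
        print("Circuit closed! The big egg is " + clue + ".")
    return reveal

buttons = []
for pin, clue in eggs.items():
    b = Button(pin)
    b.when_pressed = make_handler(clue)
    buttons.append(b)    # keep references so the pins stay active

pause()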

All the code and kit you need to recreate this game yourself is here.

Incubate baby chicks


The second-best thing about this time of year — after all the chocolate — is the cute baby animals. Lambs and bunnies get a special mention, but this project makes sure that chicken eggs are properly incubated to help baby chicks hatch. Maker Dennis Hejselbak added a live-streaming camera so he and other chick fans can keep an eye on things.

We’re sad to report that Emma still hasn’t revised her ‘No office chicks’ policy since we first reported this project back in 2015. Maybe next year?

Happy Easter!

Stand by for a delicious new issue of Wireframe magazine tomorrow. We’ll see you on Tuesday!

The post Easter fun with Raspberry Pi appeared first on Raspberry Pi.

Drag-n-drop coding for Raspberry Pi Pico

Par Ashley Whittaker

Introducing Piper Make: a Raspberry Pi Pico-friendly drag-n-drop coding tool that’s free for anyone to use.

The ‘Digital View’ option displays a dynamic view of Raspberry Pi Pico showing GPIO states

Edtech startup Piper, Inc. launched this brand new browser-based coding tool on #PiDay. If you already have a Raspberry Pi Pico, head to make.playpiper.com and start playing with the coding tool for free.

If you already have a Raspberry Pi Pico, you can get started right away

Complete coding challenges with Pico

Learn about circuits and sensors as you complete each coding challenge

The block coding environment invites you to try a series of challenges. When you succeed in blinking an LED, the next challenge is opened up to you. New challenges are released every month, and it’s a great way to guide your learning and give you a sense of achievement as you check off each task.

But I don’t have a Pico or the components I need!

You’re going to need some kit to complete these challenges. The components are easy to get hold of, and they’re things you probably already have lying around if you like to tinker. But if you’re a coding newbie and don’t have a workshop full of trinkets, Piper makes it easy for you: join their Makers Club and you’ll receive a one-off Starter Kit containing a Raspberry Pi Pico, LEDs, resistors, switches, and wires.

The Starter Kit contains everything you need to complete the first challenges

If you sign up to Piper’s Monthly Makers Club you’ll receive the Starter Kit, plus new hardware each month to help you complete the latest challenge. Each Raspberry Pi Pico board ships with Piper Make firmware already loaded, so you can plug and play.

Trying out the traffic light challenge with the Starter Kit

If you already have things like a breadboard, LEDs, and so on, then you don’t need to sign up at all. Dive straight in and get started on the challenges.

I have a Raspberry Pi Pico. How do I play?

A quick tip before we go: when you hit the Piper Make landing page for the first time, don’t click ‘Getting Started’ just yet. You need to set up your Pico first of all, so scroll down and select ‘Setup my Pico’. Once you’ve done that, you’re good to go.

Scroll down on the landing page to set up your Pico before hitting ‘Getting Started’

The post Drag-n-drop coding for Raspberry Pi Pico appeared first on Raspberry Pi.

Graphic routines for Raspberry Pi Pico screens

Par Ashley Whittaker

Pimoroni has brought out two add‑ons with screens: Pico Display and Pico Explorer. A very basic set of methods is provided in the Pimoroni UF2 file. In this article, we aim to explain how the screens are controlled with these low-level instructions, and provide a library of extra routines and example code to help you produce stunning displays.

You don’t have to get creative with your text placement, but you can

You will need to install the Pimoroni MicroPython UF2 file on your Pico and Thonny on your computer.

All graphical programs need the following ‘boilerplate’ code at the beginning to initialise the display and create the essential buffer. (We’re using a Pico Explorer – just change the first line for a Pico Display board.)

import picoexplorer as display
# import picodisplay as display
#Screen essentials
width = display.get_width()
height = display.get_height()
display_buffer = bytearray(width * height * 2)
display.init(display_buffer)

The four buttons give you a way of getting data back from the user as well as displaying information

This creates a buffer with a 16-bit colour element for each pixel of the 240×240 pixel screen. The code invisibly stores colour values in the buffer which are then revealed with a display.update() instruction.

The top-left corner of the screen is the origin (0,0) and the bottom-right pixel is (239,239).

Supplied methods

display.set_pen(r, g, b)

Sets the current colour (red, green, blue) with values in the range 0 to 255.

grey = display.create_pen(100,100,100)

Allows naming of a colour for later use.

display.clear()

Fills all elements in the buffer with the current colour.

display.update()

Makes the current values stored in the buffer visible. (Shows what has been written.)

display.pixel(x, y)

Draws a single pixel with the current colour at point (x, y).

display.rectangle(x, y, w, h)

Draws a filled rectangle from point(x, y), w pixels wide and h pixels high.

display.circle(x, y, r)

Draws a filled circle with centre (x, y) and radius r.

display.character(78, 112, 5, 2)

Draws character number 78 (ASCII = ‘N’) at point (112,5) in size 2. Size 1 is very small, while 6 is rather blocky.

display.text("Pixels", 63, 25, 200, 4)

Draws the text on the screen from (63,25) in size 4 with text wrapping to next line at a ‘space’ if the text is longer than 200 pixels. (Complicated but very useful.)

display.pixel_span(30,190,180)

Draws a horizontal line 180 pixels long from point (30,190).

display.set_clip(20, 135, 200, 100)

While the screens are quite small in size, they have plenty of pixels for display

After this instruction, which sets a rectangular area from (20,135), 200 pixels wide and 100 pixels high, only pixels drawn within the set area are put into the buffer. Drawing outside the area is ignored. So only those parts of a large circle intersecting with the clip are effective. We used this method to create the red segment.

display.remove_clip()

This removes the clip.

display.update()

This makes the current state of the buffer visible on the screen. Often forgotten.

if display.is_pressed(3): # Y button is pressed ?

Read a button, numbered 0 to 3.

You can get more creative with the colours if you wish

This code demonstrates the built-in methods and can be downloaded here.

# Pico Explorer - Basics
# Tony Goodhew - 20th Feb 2021
import picoexplorer as display
import utime, random
#Screen essentials
width = display.get_width()
height = display.get_height()
display_buffer = bytearray(width * height * 2)
display.init(display_buffer)

def blk():
    display.set_pen(0,0,0)
    display.clear()
    display.update()

def show(tt):
    display.update()
    utime.sleep(tt)
   
def title(msg,r,g,b):
    blk()
    display.set_pen(r,g,b)
    display.text(msg, 20, 70, 200, 4)
    show(2)
    blk()

# Named pen colour
grey = display.create_pen(100,100,100)
# ==== Main ======
blk()
title("Pico Explorer Graphics",200,200,0)
display.set_pen(255,0,0)
display.clear()
display.set_pen(0,0,0)
display.rectangle(2,2,235,235)
show(1)
# Blue rectangles
display.set_pen(0,0,255)
display.rectangle(3,107,20,20)
display.rectangle(216,107,20,20)
display.rectangle(107,3,20,20)
display.rectangle(107,216,20,20)
display.set_pen(200,200,200)
#Compass  points
display.character(78,112,5,2)   # N
display.character(83,113,218,2) # S
display.character(87,7,110,2)   # W
display.character(69,222,110,2) # E
show(1)
# Pixels
display.set_pen(255,255,0)
display.text("Pixels", 63, 25, 200, 4)
display.set_pen(0,200,0)
display.rectangle(58,58,124,124)
display.set_pen(30,30,30)
display.rectangle(60,60,120,120)
display.update()
display.set_pen(0,255,0)
for i in range(500):
    xp = random.randint(0,119) + 60
    yp = random.randint(0,119) + 60
    display.pixel(xp,yp)
    display.update()
show(1)
# Horizontal line
display.set_pen(0,180,0)
display.pixel_span(30,190,180)
show(1)
# Circle
display.circle(119,119,50)
show(1.5)
display.set_clip(20,135, 200, 100)
display.set_pen(200,0,0)
display.circle(119,119,50)
display.remove_clip()

display.set_pen(0,0,0)
display.text("Circle", 76, 110, 194, 3)
display.text("Clipped", 85, 138, 194, 2)
display.set_pen(grey) # Previously saved colour
# Button Y
display.text("Press button y", 47, 195, 208, 2)
show(0)
running = True
while running:
    if display.is_pressed(3): # Y button is pressed ?
        running = False
blk()

# Tidy up
title("Done",200,0,0)
show(2)
blk()

Straight lines can give the appearance of curves

We’ve included three short procedures to help reduce code repetition:

def blk() 

This clears the screen to black – the normal background colour.

def show(tt)

This updates the screen, making the buffer visible and then waits tt seconds.

def title(msg,r,g,b)

This is used to display the msg string in size 4 text in the specified colour for two seconds, and then clears the display.

As you can see from the demonstration, we can accomplish a great deal using just these built-in methods. However, it would be useful to be able to draw vertical lines, lines from point A to point B, hollow circles, and rectangles. If these are written as procedures, we can easily copy and paste them into new projects to save time and effort.

You don’t need much to create interesting graphics

In our second demonstration, we’ve included these ‘helper’ procedures. They use the parameters (t, l, r, b) to represent the (top, left) and the (right, bottom) corners of rectangles or lines.

def horiz(l,t,r):    # left, top, right

Draws a horizontal line.

def vert(l,t,b):   # left, top, bottom

Draws a vertical line.

def box(l,t,r,b):  # left, top, right, bottom

Draws an outline rectangular box.

def line(x,y,xx,yy): 

Draws a line from (x,y) to (xx,yy).

def ring(cx,cy,rr,rim): # Centre, radius, thickness

Draws a circle, centred on (cx,cy), of outer radius rr and pixel thickness of rim. This is easy and fast, but has the disadvantage that it wipes out anything inside the ring.

def ring2(cx,cy,r):   # Centre (x,y), radius

Draws a circle centred on (cx,cy), of radius r, with a single-pixel width. Can be used to flash a ring around something already drawn on the screen. You need to import math as it uses trigonometry.

def align(n, max_chars):

This returns a string version of int(n), right aligned in a string of max_chars length. Unfortunately, the font supplied by Pimoroni in its UF2 is not monospaced.
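
Tony’s exact routines ship with the full download below, but to show how little is involved, here is one plausible sketch of each helper using only the built-in methods and matching the signatures above; it assumes the usual black background set by blk().

import math

def horiz(l,t,r):    # left, top, right
    display.pixel_span(l, t, r - l + 1)

def vert(l,t,b):   # left, top, bottom
    for y in range(t, b + 1):
        display.pixel(l, y)

def box(l,t,r,b):  # left, top, right, bottom
    horiz(l, t, r)
    horiz(l, b, r)
    vert(l, t, b)
    vert(r, t, b)

def line(x,y,xx,yy):   # step along the longer axis
    steps = max(abs(xx - x), abs(yy - y)) + 1
    for i in range(steps):
        display.pixel(int(x + (xx - x) * i / steps),
                      int(y + (yy - y) * i / steps))

def ring(cx,cy,rr,rim): # Centre, radius, thickness
    display.circle(cx, cy, rr)
    display.set_pen(0, 0, 0)        # assumed black background wipes the middle
    display.circle(cx, cy, rr - rim)

def ring2(cx,cy,r):   # Centre (x,y), radius; leaves the centre untouched
    for angle in range(360):
        a = math.radians(angle)
        display.pixel(int(cx + r * math.cos(a)),
                      int(cy + r * math.sin(a)))

def align(n, max_chars):   # right-align int(n) in a max_chars-wide string
    s = str(int(n))
    return " " * (max_chars - len(s)) + s

Remember to set the pen again after calling ring(), since it leaves the background colour selected.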

What will you create with your Pico display?

The second demonstration is too long to print, but can be downloaded here.

It illustrates the character set, drawing of lines, circles and boxes; plotting graphs, writing text at an angle or following a curved path, scrolling text along a sine curve, controlling an interactive bar graph with the buttons, updating a numeric value, changing the size and brightness of disks, and the colour of a rectangle.  

The program is fully commented, so it should be quite easy to follow.

The most common coding mistake is to forget the display.update() instruction after drawing something. The second is putting it in the wrong place.

When overwriting text on the screen to update a changing value, you should first overwrite the value with a small rectangle in the background colour. Notice that the percentage value is right-aligned to lock the ‘units’ position. 
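
In code, the pattern looks something like this; the coordinates and sizes are illustrative, and it assumes the align() helper sketched above:

# Refresh a changing percentage: blank the old text first, then redraw
pc = 42                                # example value to display
display.set_pen(0, 0, 0)               # background colour
display.rectangle(150, 195, 56, 16)    # small box over the old value only
display.set_pen(255, 255, 255)
display.text(align(pc, 3) + " %", 150, 195, 100, 2)
display.update()                       # easy to forget!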

It’s probably not a good idea to leave your display brightly lit for hours at a time. Several people have reported the appearance of ‘burn’ on a dark background, or ‘ghost’ marks after very bright items against a dark background have been displayed for some time. We’ve seen them on our display, but no long-term harm is evident. Blanking the screen in the ‘tidy-up’ sequence at the end of your program may help.

We hope you have found this tutorial useful and that it encourages you to start sending your output to a display. This is so much more rewarding than just printing to the REPL.

If you have a Pimoroni Pico Display (240×135 pixels), all of these routines will work on your board.

Issue 41 of HackSpace magazine is on sale NOW!

Each month, HackSpace magazine brings you the best projects, tips, tricks and tutorials from the makersphere. You can get it from the Raspberry Pi Press online store or your local newsagents. As always, every issue is free to download from the HackSpace magazine website.

The post Graphic routines for Raspberry Pi Pico screens appeared first on Raspberry Pi.

Raspberry Pi dog detector (and dopamine booster)

Par Ashley Whittaker

You can always rely on Ryder’s YouTube channel to be full of weird and wonderful makes. This latest offering aims to boost dopamine levels with dog spotting. Looking at dogs makes you happier, right? But you can’t spend all day looking out of the window waiting for a dog to pass, right? Well, a Raspberry Pi Camera Module and machine learning can do the dog spotting for you.

What’s the setup?

Ryder’s Raspberry Pi and camera sit on a tripod pointing out of a window looking over a street. Live video of the street is taken by the camera and fed through a machine learning model. Ryder chose the YOLO v3 object detection model, which can already recognise around 80 different things — from dogs to humans, and even umbrellas.

Camera set up ready for dog spotting

Doggo passing announcements

But how would Ryder know that his Raspberry Pi had detected a dog? They’re so sneaky — they work in silence. A megaphone and some text-to-speech software make sure that Ryder is alerted in time to run to the window and see the passing dog. The megaphone announces: “Attention! There is a cute dog outside.”
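
Ryder doesn’t publish his listing in the video, but the detect-and-announce loop might look roughly like this sketch, using OpenCV’s DNN module with YOLOv3 weights and espeak for the megaphone. The file names, threshold, and cooldown are our assumptions.

# Rough sketch of a detect-and-announce loop (not Ryder's code).
# Assumes yolov3.cfg, yolov3.weights, and coco.names are downloaded,
# and that espeak is installed for text-to-speech.
import cv2, subprocess, time

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
classes = open("coco.names").read().splitlines()
camera = cv2.VideoCapture(0)
last_announcement = 0

while True:
    ok, frame = camera.read()
    if not ok:
        continue
    blob = cv2.dnn.blobFromImage(frame, 1 / 255, (416, 416), swapRB=True)
    net.setInput(blob)
    dog_seen = False
    for output in net.forward(net.getUnconnectedOutLayersNames()):
        for detection in output:
            scores = detection[5:]      # per-class confidences
            best = scores.argmax()
            if classes[best] == "dog" and scores[best] > 0.5:
                dog_seen = True
    # announce at most once every 30 seconds per sighting
    if dog_seen and time.time() - last_announcement > 30:
        subprocess.run(["espeak", "Attention! There is a cute dog outside."])
        last_announcement = time.time()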

The machine learning program clearly labels a ‘person’ and a ‘dog’

“Hey! Cute dog!”

Ryder wanted to share the love and show his appreciation to the owners of cute dogs, so he added a feature for when he is out of the house. With the megaphone poking out of a window, the Raspberry Pi does its dog-detecting as usual, but instead of alerting Ryder, it announces: “I like your dog” when a canine is walked past.

When has a megaphone ever NOT made a project better?

Also, we’d like to learn more about this ‘Heather’ who apparently once scaled a six-foot fence to pet a dog and for whom Ryder built this. Ryder, spill the story in the comments!

The post Raspberry Pi dog detector (and dopamine booster) appeared first on Raspberry Pi.

Kay-Berlin Food Computer | The MagPi #104

Par Ashley Whittaker

In the latest issue of The MagPi Magazine, out today, Rob Zwetsloot talks to teacher Chris Regini about the incredible project his students are working on.

When we think of garden automation, we often think of basic measures like checking soil moisture and temperature. The Kay-Berlin Food Computer, named after student creators Noah Kay and Noah Berlin, does a lot more than that. A lot more.

At night, an IR LED floodlight allows for infrared camera monitoring via a Raspberry Pi NoIR Camera Module

“It is a fully automated growth chamber that can monitor over a dozen atmospheric and root zone variables and post them to an online dashboard for remote viewing,” Chris Regini tells us. He’s supervising both Noahs in this project. “In addition to collecting data, it is capable of adjusting fan speeds based on air temperature and humidity, dosing hydroponic reservoirs with pH adjustment and nutrient solutions via peristaltic pumps, dosing soil with water based on moisture sensor readings, adjusting light spectra and photoperiods, and capturing real-time and time-lapsed footage using a [Raspberry Pi] Camera Module NoIR in both daylight and night-time growth periods.”

Everything can be controlled manually or set to be autonomous. This isn’t just keeping your garden looking nice, this is the future of automated farming.
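
None of the students’ code appears in the article, but an autonomous rule like ‘adjust fan speed based on air temperature’ boils down to a read-and-respond loop. This is an illustrative sketch with stand-in sensor and actuator functions, not the project’s implementation.

# Illustrative control loop (not the Kay-Berlin code): read a sensor,
# respond with an actuator, repeat. Stand-ins replace real hardware.
import random
import time

def read_air_temp_c():
    return 20 + random.random() * 10   # stand-in for a real sensor driver

def set_fan_speed(fraction):
    print("fan -> {:.0%}".format(fraction))  # stand-in for a PWM output

TARGET_C = 24.0
while True:
    temp = read_air_temp_c()
    # crude proportional control: run the fan harder the hotter it gets
    error = max(0.0, temp - TARGET_C)
    set_fan_speed(min(1.0, error / 5))
    time.sleep(60)                     # one adjustment per minute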

All the data is used for automation, but it’s accessible to students for manual control

Seeds of knowledge

“The idea originated from the long-standing MIT food computer project and lots of open-source collaboration in both the agriculture and Raspberry Pi communities,” Chris explains. “We’ve always had the hopes of creating an automated growing system that could collect long-term data for use in the ISS during space travel or in terrestrial applications where urbanisation or climate concerns required the growth of food indoors.”

With students doing a lot of learning from home in the past year, having such a system accessible online for interaction was important for Chris: “Adding a layer that could keep students engaged in this endeavour during remote learning was the catalyst that truly spurred on our progress.”

All data is viewable in real time and historically

This level of control and web accessibility is perfect for Raspberry Pi, which Chris, his students, and his Code Club have been using for years.

“The fact that we had access to the GPIOs for sensors and actuators as well as the ability to capture photo and video was great for our application,” Chris says. “Being able to serve the collected data and images to the web, as well as schedule subroutines via systemd, made it the perfect fit for accessing our project remotely and having it run time-sensitive programs.”

There are six plants in the box, allowing for a lot of data collection

The computer has been in development for a while, but the students working on it have a wide range of skills that have made it possible.

“We have had a dedicated nucleus of students that have spent time learning plant science, electronic circuitry, Python, developing UIs, and creating housings in CAD,” Chris explains. “They all started as complete beginners and have benefited greatly from the amazing tutorials available to them through the Raspberry Pi Foundation website as well as the courses offered on FutureLearn.”

Grow beyond

The entire system has a network of sensors which monitor atmospheric variables of air temperature, humidity, CO2, O2, and air pressure

The project is ongoing – although they’re already getting a lot of data that is being used for citizen science.

“The system does a fantastic job collecting data and allowing us to visualise it via our Adafruit IO+ dashboards,” Chris says. “Upgrading our sensors and actuators to more reliable and accurate models has allowed the system to produce research level data that we are currently sharing in a citizen science project called Growing Beyond Earth. It is funded by NASA and is organised through Fairchild Botanical Gardens. We have been guided along the way by industry professionals in the field of hydroponics and have also collaborated with St. Louis-based MARSfarm to upgrade the chamber housing, reflective acrylic panels, and adjustable RGBW LED panel.  Linking our project with scientists, engineers, researchers, and entrepreneurs has allowed it to really take off.”

Get your copy of The MagPi #104 now!

You can grab the brand-new issue right now online from the Raspberry Pi Press store, or via our app on Android or iOS. You can also pick it up from supermarkets and newsagents, but make sure you do so safely while following all your local guidelines. There’s also a free PDF you can download.

MagPi 104 cover

The post Kay-Berlin Food Computer | The MagPi #104 appeared first on Raspberry Pi.

How to add Ethernet to Raspberry Pi Pico

Par Alasdair Allan

Raspberry Pi Pico has a lot of interesting and unique features, but it doesn’t have networking. Of course this was only ever going to be a temporary inconvenience, and sure enough, over Pi Day weekend we saw both USB Ethernet and Ethernet PHY support released for Pico and RP2040.

Raspberry Pi Pico and RMII Ethernet PHY

The PHY support was put together by Sandeep Mistry, well known as the author of the noble and bleno Node.js libraries, as well as the Arduino LoRa library, amongst others. Built around the lwIP stack, it leverages the PIO, DMA, and dual-core capabilities of RP2040 to create an Ethernet MAC stack in software. The project currently supports RMII-based Ethernet PHY modules like the Microchip LAN8720.

Breakout boards for the LAN8720 can be found on AliExpress for around $1.50. If you want to pick one up next day on Amazon you should be prepared to pay somewhat more, especially if you want Amazon Prime delivery, although they can still be found fairly cheaply if you’re prepared to wait a while.

What this means is that you can now hook up your $4 microcontroller to an Ethernet breakout costing less than $2 and connect it to the internet.

Building from source

If you already have the Raspberry Pi Pico toolchain set up and working, make sure your pico-sdk checkout is up to date, including submodules. If not, you should first set up the C/C++ SDK. Afterwards, you need to grab the project from GitHub, along with the lwIP stack.

$ git clone git@github.com:sandeepmistry/pico-rmii-ethernet.git
$ cd pico-rmii-ethernet
$ git submodule update --init

Make sure you have your PICO_SDK_PATH set before proceeding. For instance, if you’re building things on a Raspberry Pi and you’ve run the pico_setup.sh script, or followed the instructions in our Getting Started guide, you’d point the PICO_SDK_PATH to

$ export PICO_SDK_PATH=/home/pi/pico/pico-sdk

then after that you can go ahead and build both the library and the example application.

$ mkdir build
$ cd build
$ cmake ..
$ make

If everything goes well you should have a UF2 file in build/examples/httpd called pico_rmii_ethernet_httpd.uf2. You can now load this UF2 file onto your Pico in the normal way.

Go grab your Raspberry Pi Pico board and a micro USB cable. Plug the cable into your Raspberry Pi or laptop, then press and hold the BOOTSEL button on your Pico while you plug the other end of the micro USB cable into the board. Then release the button after the board is plugged in.

A disk volume called RPI-RP2 should pop up on your desktop. Double-click to open it, and then drag and drop the UF2 file into it. Your Pico is now running a webserver. Unfortunately it’s not going to be much use until we wire it up to our Ethernet breakout board.

Wiring things up on the breadboard

Unfortunately the most common (and cheapest) breakout for the LAN8720 isn’t breadboard-friendly, although you can find some boards that are, so you’ll probably need to grab a bunch of male-to-female jumper wires along with your breadboard.

LAN8720 breakout wired to a Raspberry Pi Pico on a breadboard (with reset button)

Then wire up the breakout board to your Raspberry Pi Pico. Most of these boards seem to be well labelled, with the left-hand labels corresponding to the top row of breakout pins. The mapping between the pins on the RMII-based LAN8720 breakout board and your Pico should be as follows:

Pico         RP2040¹   LAN8720 Breakout
Pin 9        GP6       RX0
Pin 10       GP7       RX1 (RX0 + 1)
Pin 11       GP8       CRS (RX0 + 2)
Pin 14       GP10      TX0
Pin 15       GP11      TX1 (TX0 + 1)
Pin 16       GP12      TX-EN (TX0 + 2)
Pin 19       GP14      MDIO
Pin 20       GP15      MDC
Pin 26       GP20      nINT / RETCLK
3V3 (OUT)              VCC
Pin 38       GND       GND
Mapping between physical pin number, RP2040 pin, and LAN8720 breakout

¹ These pins are the library default and can be changed in software.

Once you’ve wired things up, plug your Pico into Ethernet and also via USB into your Raspberry Pi or laptop. As well as powering your Pico you’ll be able to see some debugging information via USB Serial. Open a Terminal window and start minicom.

$ minicom -D /dev/ttyACM0

If you’re having problems, see Chapter 4 of our Getting Started guide for more information.

Hopefully, so long as your router is handing out IP addresses, you should see something like this in the minicom window, showing that your Pico has grabbed an IP address using DHCP:

pico rmii ethernet - httpd                              
netif status changed 0.0.0.0                            
netif link status changed up                            
netif status changed 192.168.1.110

If you open up a browser window and type the IP address that your router has assigned to your Pico into the address bar, you should see the default lwIP index page:

Viewing the web page served from our Raspberry Pi Pico.

Congratulations. Your Pico is now a web server.

Changing the web pages

It turns out to be pretty easy to change the web pages served by Pico. You can find the “file system” with the default lwIP pages inside the HTTP application in the lwIP Git submodule.

$ cd pico-rmii-ethernet/lib/lwip/src/apps/http/fs
$ ls 
404.html   img/        index.html
$ 

You should modify the index.html file in situ here with your favourite editor. Afterwards we’ll need to move the file system directory into place, and then we can repackage it up using the associated makefsdata script.

$ cd ..
$ mv fs makefsdata 
$ cd makefsdata
$ perl makefsdata

Running this script will create an fsdata.c file in the current directory. You need to move this file up to the parent directory and then rebuild the UF2 file.

$ mv fsdata.c ..
$ cd ../../../../../..
$ rm -rf build
$ mkdir build
$ cd build
$ cmake ..
$ make

If everything goes well you should have a new UF2 file in build/examples/httpd called pico_rmii_ethernet_httpd.uf2, and you can again load this UF2 file onto your Pico as before.

The updated web page served from our Raspberry Pi Pico

On restart, wait until your Pico grabs an IP address again, then open up a browser window and type the IP address assigned to your Pico into the address bar; you should now see an updated web page.

You can go back and edit the page served from your Pico, and build an entire site. Remember that you’ll need to rebuild the fsdata.c file each time before you rebuild your UF2.

Current limitations

There are a couple of limitations on the current implementation. The RP2040 is underclocked to just 50MHz using the RMII module’s reference clock, while the lwIP stack is compiled with NO_SYS, so neither the Netcon API nor the Socket API is enabled. Finally, link speed is set to 10 Mbps, as there is currently an issue with TX at 100 Mbps.

Where next?

While the example Sandeep put together used the lwIP web server, there are a number of other library application examples we can grab and twist to our own ends, including TFTP and MQTT example applications. Beyond that, lwIP is a TCP/IP stack. Anything you can do over TCP you can now do from your Pico.

Wrapping up

Support for developing for Pico can be found on the Raspberry Pi forums. There is also an (unofficial) Discord server where a lot of people active in the new community seem to be hanging out. Feedback on the documentation should be posted as an Issue to the pico-feedback repository on GitHub, or directly to the relevant repository it concerns.

All of the documentation, along with lots of other help and links, can be found on the Getting Started page. If you lose track of where that is in the future, you can always find it from your Pico: to access the page, just press and hold the BOOTSEL button on your Pico, plug it into your laptop or Raspberry Pi, then release the button. Go ahead and open the RPI-RP2 volume, and then click on the INDEX.HTM file.

That will always take you to the Getting Started page.

The post How to add Ethernet to Raspberry Pi Pico appeared first on Raspberry Pi.

Automate analogue film scanning with Raspberry Pi and LEGO

Par Ashley Whittaker

This automated analogue film scanner runs on a Raspberry Pi and LEGO bricks. BenjBez took to Reddit to share this incredible lockdown project, which makes processing film photographs easier.

Video by Benjamin Bezine

Benj explains:

“When doing analog photography, scanning is the most painful part – RoboScan tries to make the whole workflow easier, from the film to the final image file.”

Mesmerising, isn’t it? We don’t know why we want it, we just do. We love it when new technology supports traditional methods with hacks like this. It reminded us of this Raspberry Pi powered e-paper display that takes months to show a movie.

How does it work?

A 3D rendering of the LEGO parts used to make the scanner, from Mecabricks

The film roll is fed through the LEGO frame and lit by an integrated LED backlight. Machine learning detects when a photo is correctly framed and ready for scanning, then a digital camera takes another photo of it. RoboScan downloads the photos from your digital camera as soon as they are taken. Only 80 photos were used to train the model, which Benj has shared here.

This is what the machine learning sees. In purple are the tentative complete frames

But I only take digital photos anyway…

Most of us rely on our phones these days to capture special moments. However, we bet loads of you have relatives with albums full of precious photos they would hate to lose; maybe you could digitise the negatives for safekeeping using this method?

Benj is still working on his creation, sharing this updated version a few months ago

Best of all – it’s all open source and available on GitHub.

Thanks, Electromaker!

Skip to 16 mins 37 seconds to watch Electromaker’s take on this project

We love our lovely friends at Electromaker and we found this project through them. (They found it on Reddit.) They release a new video every week, so make sure to subscribe on YouTube so you don’t miss out.

The post Automate analogue film scanning with Raspberry Pi and LEGO appeared first on Raspberry Pi.

Expanding our free Isaac Computer Science platform with new GCSE content

Par Duncan Maidens

We are delighted to announce that we’re expanding our free Isaac Computer Science online learning platform in response to overwhelming demand from teachers and students for us to cover GCSE content.

Woman teacher and female students at a computer.

Thanks to our contract with England’s Department for Education which is funding our work as part of the National Centre for Computing Education (NCCE) consortium, we’ve been able to collaborate with the University of Cambridge’s Department of Computer Science and Technology to build the Isaac Computer Science platform, and to create an events programme, for A level students and teachers. Now we will use this existing funding to also provide content and events for learning and teaching GCSE computer science.

Building on our success

With content designed by our expert team of computer science teachers and researchers, the Isaac Computer Science platform is already being used by 2,000 teachers and 18,000 students at A level. The platform houses a rich set of interactive study materials and reflective questions, providing full coverage of exam specifications.

Within the Teach Computing Curriculum we built as part of our NCCE work, we’ve already created free classroom resources to support teachers with the delivery of GCSE computer science (as well as the rest of the English computing curriculum from Key Stages 1 to 4). Expanding the Isaac Computer Science platform to offer interactive learning content to GCSE students, and running events specifically for GCSE students, will perfectly complement the Teach Computing Curriculum and support learners to continue their computing education beyond GCSE.

One male student and two female students in their teens work at a computer.

We’ll use our tried and tested process of content design, implementation of student and teacher feedback, and continual improvements based on evidence from platform usage data, to produce an educational offering for GCSE computer science that is of the highest quality.

What will Isaac Computer Science GCSE cover?

Isaac Computer Science GCSE will support students and teachers of GCSE computer science across the OCR, AQA, Pearson Edexcel, Eduqas, and WJEC exam bodies, covering the whole of the national curriculum. The content will be aimed at ages 14 to 16, and it will be suitable for students of all experience levels and backgrounds — from those who have studied little computer science at Key Stage 3 and are simply interested, to those who are already set to pursue a career related to computer science.

Benefits for students and teachers

Students will be able to:

  • Use the platform for structured, self-paced study and progress tracking
  • Prepare for their GCSE examinations according to their exam body
  • Get instant feedback from the interactive questions to guide further study
  • Explore areas of interest more deeply

Teachers will be able to:

  • Use the content and examples on the platform as the basis for classroom work
  • Direct their students to topics to read as homework
  • Set self-marking questions as homework or in the classroom as formative assessment to identify areas where additional support is required and track students’ progress

Free events for learning, training, and inspiration

As part of Isaac Computer Science GCSE, we’ll also organise an events programme for GCSE students to get support with specific topics, as well as inspiration about opportunities to continue their computer science education beyond GCSE into A level and higher education or employment.

Male teacher and male students at a computer

For teachers, we’ll continue to provide a wide spectrum of free CPD training events and courses through the National Centre for Computing Education.

Accessible all over the world

As is the case for the Isaac Computer Science A level content, we’ll create content for this project to suit the English national curriculum and exam bodies. However, anyone anywhere in the world will be able to access and use the platform for free. The content will be published under an Open Government License v3.0.

When does Isaac Computer Science GCSE launch, and can I get involved now?

We will launch in January 2022, with the full suite of content available by September 2022.

We’ll be putting out calls to the teaching community in England, asking for your help to guide the design and quality assurance of the Isaac Computer Science GCSE materials.

Follow Isaac Computer Science on social media and sign up on the Isaac Computer Science platform to be the first to hear news!

The post Expanding our free Isaac Computer Science platform with new GCSE content appeared first on Raspberry Pi.

Raspberry Pi Imager update to v1.6

Par Gordon Hollingworth

Since Raspberry Pi Imager was released just over a year ago, we’ve made a number of changes and fixes to help make it more reliable and easier to use.

But you may wonder whether it’s changed at all, because it looks almost exactly the same as it did last year. That’s not a coincidence — we’ve deliberately kept it as simple and straightforward as we can.

Raspberry Pi Imager

Our mission in designing and developing Imager was to make it as easy to use as possible, with the smallest possible number of clicks. This reduces complexity for the user and reduces the scope for users to make mistakes. However, at the same time, some of our users were asking for more complex functionality. This presented me with a tricky problem: how could we support advanced functionality, while also making it easy to use and hard to get wrong?

After much wrangling in GitHub issues, I finally folded, and decided to introduce an advanced options menu.

For those of you adventurous enough to want to play with the advanced options, you need to press the magic key sequence:

‘Ctrl-Shift-X’

Using the advanced options menu obviously involves a few extra clicks, but it’s actually pretty simple, and it’s worth a look if you find you frequently need to make config changes after you flash a new SD card. It allows you to set some common options (for example, if you set the hostname correctly you don’t need to have a static IP address), and you can either save these for future images or use them for this session only.

If you’d like to turn off telemetry, that’s fine; all it does is send a ping to the Raspberry Pi website that lets us create the statistics pages here. To understand what we send, you can read about it on our GitHub page.

Try Raspberry Pi Imager today

Raspberry Pi Imager is available for Windows, macOS, Ubuntu for x86, and Raspberry Pi OS. Download options are available on our Downloads page, or you can use sudo apt install rpi-imager in a Terminal window to install it on a Raspberry Pi.

Once installed, simply follow the on-screen instructions and you’re good to go. Here’s a handy video to show just how easy it is to prepare your SD card.

The post Raspberry Pi Imager update to v1.6 appeared first on Raspberry Pi.

Supercomputing with Raspberry Pi | HackSpace 41

Par Andrew Gregory

Although it’s a very flexible term, supercomputing generally refers to the idea of running multiple computers as one, dividing up the work between them so that they process in parallel.

In theory, every time you double the amount of processing power available, you halve the time needed to complete your task. This concept of ‘clusters’ of computers has been implemented heavily in large data processing operations, including intensive graphics work such as Pixar’s famous ‘render farm’. Normally the domain of large organisations, supercomputing is now in the hands of the masses in the form of education projects and makes from the cluster-curious, but there have also been some impressive real-world applications. Here, we’ll look at some amazing projects and get you started on your own supercomputing adventure.

OctaPi

One of the first high-profile cluster projects surprisingly came from the boffins at GCHQ (Government Communications Headquarters) in the UK. Created as part of their educational outreach programme, the OctaPi used eight Raspberry Pi 3B computers to create a cluster. Kits were loaned out to schools with multiple coding projects to engage young minds. The first demonstrated how supercomputing could speed up difficult calculations, in this case computing pi. A more advanced, and very appropriate, task showed how these eight machines could work together to crack a wartime Enigma code in a fraction of the time it would have taken Bletchley Park.

Turing Pi

As we’ve already said, most Raspberry Pi cluster projects are for education or fun, but there are those who take it seriously. The Raspberry Pi Compute Module form factor is perfect for building industrial-grade supercomputers, and that’s exactly what Turing Pi has done. Their custom Turing Pi 1 PCB can accept up to seven Raspberry Pi 3+ Compute Modules and takes care of networking, power, and USB connectivity. Although marketed for a wide range of uses, it appears to have found a niche in the Kubernetes world, being a surprisingly powerful device for its price. Future plans have been announced for the Turing Pi 2, based on the more powerful Raspberry Pi 4.

Water-Cooled Cluster

Multiple machines are one thing, but the speed of each individual machine matters too: every node you make faster speeds up the cluster as a whole. Overclocking is common in supercomputing, and that means heat. This water-cooled cluster, which maker Michael Klements freely admits is one of those ‘just because’ undertakings, uses the kind of water cooling usually found on high-end gaming PCs and applies it to a Raspberry Pi cluster. This beautiful build, complete with laser-cut mounts and elegant wiring, has been extensively documented by Klements in his blog posts. We can’t wait to see what he does with it!

Oracle Supercomputer

So how far can we take this? Who has built the largest Raspberry Pi cluster? A strong contender seems to be Oracle, who showed off their efforts at Oracle OpenWorld in 2019. No fewer than 1060 Raspberry Pi 3B+ computers were used in its construction (that’s 4240 cores). Why 1060? That’s as much as they could physically fit in the frame! The creation has no particular purpose bar a demonstration of what is possible in a small space, cramming in several network switches, arrays of USB power supplies, and a NAS (network-attached storage) for boot images.


Make your own

We’re thinking you probably don’t fancy trying to beat Oracle’s record on your first attempt, and would like to start with something a bit simpler. Our sister magazine, The MagPi, has published a cluster project you can make at home with any number of Raspberry Pi devices (although just one might be a little pointless). In this case, four Raspberry Pi 4B computers were assigned the job of searching for prime numbers. Each is assigned a different starting number, and then each adds four before testing again. This is handled by MPI (Message Passing Interface), an open-source message-passing system widely used on clusters. A solid introduction to what is possible.
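
If you fancy trying the idea in Python, here’s a minimal sketch of that divide-and-stride approach using the mpi4py library; the MagPi project’s actual code may differ. Run it across your nodes with something like mpiexec -n 4 python3 primes.py:

from mpi4py import MPI

def is_prime(n):
    if n < 2:
        return False
    factor = 2
    while factor * factor <= n:
        if n % factor == 0:
            return False
        factor += 1
    return True

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this node's ID, 0..size-1
size = comm.Get_size()   # number of processes in the cluster

# Each node starts at a different number and strides by the
# cluster size, so the candidates are shared out with no overlap.
found = [n for n in range(rank, 100_000, size) if is_prime(n)]

# Gather every node's results on node 0 and report.
all_found = comm.gather(found, root=0)
if rank == 0:
    primes = sorted(p for chunk in all_found for p in chunk)
    print("Found", len(primes), "primes below 100,000")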

Issue 41 of HackSpace magazine is on sale NOW!

Each month, HackSpace magazine brings you the best projects, tips, tricks and tutorials from the makersphere. You can get it from the Raspberry Pi Press online store or your local newsagents. As always, every issue is free to download from the HackSpace magazine website.

The post Supercomputing with Raspberry Pi | HackSpace 41 appeared first on Raspberry Pi.

Raspberry Pi auto-uploads camera images with PiPhoto

Par Ashley Whittaker

Picture the scene – you’ve just returned from an amazing trip armed with hundreds of photos. You don’t want to lose those memories. However, you also don’t want to spend the next three hours uploading and organising them.

An automated solution

Lou Kratz spends pretty much every weekend capturing his adventures on camera. But he couldn’t stand the digital admin, so he invented PiPhoto to automate the process.

Video from Lou’s YouTube channel

As you can see from the video, Lou has created a wonderfully simple solution. You just plug your SD card into your Raspberry Pi, and your photos automatically upload onto your computer. Game changer.

What does PiPhoto do?

  • Mount the SD card on insert
  • Start flashing the green LED
  • Execute a sync command of your choosing
  • Make the green LED solid when the command completes
  • Make the red LED flash if the sync command fails

Can I build one myself?

Yes! Lou is our favourite kind of maker in that he has open-sourced everything on GitHub. There are also step-by-step instructions on Lou’s blog.

PiPhoto cycling through on Raspberry Pi

You can easily change the sync command to better fit your needs, and Lou has already made some improvements. Here is a guide to making your Raspberry Pi organise photos by date as they’re uploaded. You can keep up with any new additions via Lou’s GitHub.
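
To give a flavour of what a custom sync hook might look like, here’s a minimal sketch (not Lou’s actual implementation) that runs rsync and drives the status LEDs with gpiozero; the GPIO pins, mount point, and destination are all assumptions you’d swap for your own:

import subprocess
from gpiozero import LED

green = LED(17)   # hypothetical status LED pins
red = LED(27)

def sync_card(mount_point="/media/sdcard"):
    green.blink()   # flashing green: sync in progress
    result = subprocess.run(
        ["rsync", "-av", mount_point + "/DCIM/",
         "pi@nas.local:/photos/incoming/"])
    green.off()
    if result.returncode == 0:
        green.on()    # solid green: success
    else:
        red.blink()   # flashing red: the sync command failed

sync_card()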

Now we don’t have to ditch our beloved older cameras for newer models with wireless connectivity built in. Thanks Lou!

The post Raspberry Pi auto-uploads camera images with PiPhoto appeared first on Raspberry Pi.

Interactive origami art with Raspberry Pi

Par Ashley Whittaker

Ross Symons is an incredible origami artist who harnessed Raspberry Pi and Bare Conductive’s Pi Cap board to bring his traditional paper creations to life in an interactive installation piece.

Video by White On Rice

Touchy-feely

The Pi Cap is “[a]n easy-to-attach add-on board that brings capacitive sensing to your Raspberry Pi projects.” Capacitive sensing is how touchscreens on your phone and tablet work: basically, the Pi Cap lets the Raspberry Pi know when something – in this case, an origami flower – is being touched.

Lovely photo from Bare Conductive

Aaaand relax

Ross named his creation “Wonder Wall – an Origami Meditation Mural”. Visitors put on headphones next to the origami flower wall, and listen to different soothing sounds as the Pi Cap senses that one of the green flowers is being touched.

The Raspberry Pi uses the Python library Pygame to play the sound effects.
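
As a rough illustration of the approach (not Ross’s actual code), a Pygame sound loop might look something like this, assuming a helper that wraps the Pi Cap’s touch readings and some placeholder sound files:

import time
import pygame

pygame.mixer.init()
sounds = [pygame.mixer.Sound("sounds/flower_%d.wav" % n)
          for n in range(12)]   # one sound per Pi Cap electrode

def flower_touched():
    # Stub: return the index of a touched electrode, or None.
    # Replace with the Pi Cap's touch readings (Bare Conductive
    # publish Python examples for this part).
    return None

while True:
    electrode = flower_touched()
    if electrode is not None:
        sounds[electrode].play()   # a soothing sound per flower
    time.sleep(0.05)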

Green and white Origami flowers
Origami flowers ready for the installation. Photo from Bare Conductive

Electric paint

Sixty-four origami flowers were mounted on a canvas, a much lighter and more readily transportable option than a big wooden board.

On the back of the board, the Pi Cap and Raspberry Pi connect to each origami flower with electric paint and copper tape. The electric paint “solders” the copper tape to the Pi Cap, and also allows for connections around corners.

Drop a comment below if you’ve ever used electric paint in a project.

Pi Cap board and electric paint
The Pi Cap board connects to origami flowers with electric paint (being applied from the white tube) and copper tape. Photo from Bare Conductive

Insta-cutie

Check out Ross’s beautiful Instagram account @white_onrice. It’s full of incredible paper creations and inspired stop-motion animations. Our favourite is this little crane having a whale of a time.

Lastly, make sure to follow White On Rice on YouTube for more mesmerising origami art.

The post Interactive origami art with Raspberry Pi appeared first on Raspberry Pi.

How not to code: a guide to concise programming

Par Ryan Lambie

Updating a 22-year-old game brought Andrew Gillett face to face with some very poor coding practices. Read more about it in this brilliant guest article from the latest issue of Wireframe magazine.

In 1998, at the age of 17, I was learning how to write games in C. My first attempt, the subtly titled DEATH, was not going well. The game was my take on Hardcore, a 1992 Atari ST game by legendary game developer and sheep enthusiast Jeff Minter, which had been released only as an unfinished five-level demo.

A series of ultrabombs blowing up a snake.

The player controlled four gun turrets on the outside of a square arena, into which enemies teleported. While the original game had been enjoyable and promising, my version wasn’t much fun, and I couldn’t work out why. Making a decent game would also have involved making dozens of levels and many enemy types, which was looking like too big a task, especially as I was finding it hard to understand the intricacies of how the enemies in Hardcore moved.

So I abandoned that game and decided to replicate a different one – 1994’s MasterBlaster, a Bomberman-style game on the Commodore Amiga. MasterBlaster didn’t have a single-player mode or bots, so there was no enemy AI to write. And the level was just a grid with randomly generated walls and power-ups – so there was no real level design involved. With those two hurdles removed, development went fairly smoothly, the biggest challenge being working out some of the subtleties of how character movement worked.

The 2021 version of Partition Sector

The game, which I named Partition Sector, was finished in mid-1999 and spent the next 18 years on my website being downloaded by very few people. In late 2018 I decided to do a quick update to the game and release it on Steam. Then I started having ideas, and ended up working on it, on and off, for two years.

One of the biggest hurdles I came across when writing my first games was how to structure the code. I knew how to write a basic game loop, in which you update the positions of objects within the game, then draw the level and the objects within it, and then loop back to the start, ending the loop when the ‘game over’ criteria are met or the player has chosen to quit. But for a full game you need things like a main menu, submenus, going through a particular number of rounds before returning to the main menu, and so on. In the end, I was able to come up with something that worked, but looking back on my old code 20 years on, I could see many cases of absolutely terrible practice.
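
For anyone new to the idea, here’s a minimal, self-contained sketch of that basic loop, with Python standing in for the original C:

import random

player_hp = 3
frame = 0

def update_objects():
    global player_hp
    if random.random() < 0.02:   # stand-in for taking a hit
        player_hp -= 1

def draw():
    print("frame %d: hp %d" % (frame, player_hp))

running = True
while running:          # the basic game loop
    update_objects()    # update positions of objects in the game
    draw()              # then draw the level and the objects
    frame += 1
    if player_hp <= 0:  # the 'game over' criterion ends the loop
        running = False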

“I started having ideas, and ended up working on it, on and off, for two years”

While most of my time was spent adding new features, a lot of time was spent rewriting and restructuring old code. I’m going to share some examples from the original code so you don’t make the same mistakes!

This is just a snippet of Andrew’s brilliant monster-sized tutorial, which you can read in full in the latest issue of Wireframe magazine. No subscription? No problem! You can read the rest of this post in full for free in PDF format.

Wireframe issue 48
You can read more features like this one in Wireframe issue 48, available directly from Raspberry Pi Press — we deliver worldwide.

The post How not to code: a guide to concise programming appeared first on Raspberry Pi.

Low-cost Raspberry Pi Zero endoscope camera

Par Ashley Whittaker

Researchers at the University of Cape Town set about developing an affordable wireless endoscope camera to rival expensive, less agile options.

Endoscopic cameras are used to look at organs inside your body. A long, thin, flexible tube with a light at the end is fed down your throat (for example), and an inside view of all your organs is transmitted to a screen for medical review.

Problem is, these things are expensive to build. Also, the operator is tethered by camera wires and power cables.

Low cost endoscope camera
The prototype featured in Lazarus & Ncube research paper

With this low-cost prototype, the camera is mounted at the end with LEDs instead of fibre-optic lights. The device is battery-powered and can run for two hours on a charge. Traditional endoscopes require external camera cables and a hefty monitor, so this wireless option saves space and provides much more freedom. Weighing in at just 184g, it’s also much more portable.

The prototype incorporates a 1280 × 720 pixel high-definition tube camera, and transmits video to a standard laptop for display. Perhaps this idea could be developed to support an even more agile display, such as a phone or a touchscreen tablet.

Thousands of dollars cheaper

This Raspberry Pi-powered wireless option also saves thousands of dollars. It was built for just $230, whereas contemporary wired options cost around $28,000.

Urologists at the University of Cape Town created the prototype. J. M. Lazarus & M. Ncube hope their design will be more accessible to medical settings that have less money available. You can read their research paper for an in-depth look at the whole process.

Traditional endescope camera cross section
A traditional endoscope. Image from Lazarus & Ncube’s original paper

The researchers focused on open-source resources to keep the cost low; we’ll learn more about the RaspAP software they used below. Affordability also led them to Raspberry Pi Zero W which, at just $10, is able to handle high-definition video.

What is RaspAP?

Billz, who shared the project on Reddit, is one of the developers of RaspAP.

RaspAP is a wireless setup and management system that lets you get a wireless access point up and running quickly on Raspberry Pi. Here, the Raspberry Pi is receiving images sent from the camera and transmitting them to a display device.

An example of a RaspAP dashboard

There is also a quick installer available for RaspAP. It creates a default configuration that “just works” on all Raspberry Pis with onboard wireless.

We wonder what other medical equipment could be greatly improved by developing an affordable wireless version?

A banner with the words "Be a Pi Day donor today"

The post Low-cost Raspberry Pi Zero endoscope camera appeared first on Raspberry Pi.

Remotely monitor freezer temperatures with Raspberry Pi

Par Ashley Whittaker

Elizabeth from Git Tech’d has shown us how to monitor freezers and fridges remotely with a temperature sensor and Raspberry Pi. A real-time temperature monitor dashboard lets you keep an eye on things, and text message alerts can be set up to let you know when the temperature is rising.

The idea came about after Rick Kuhlman‘s wife lost a load of breast milk she had stored in the freezer. To make sure that months of hard work was never wasted again, Rick came up with this $30 solution.

Kit list

The whole kit packed together in a transparent case
Everything packed together in the protective case

Setup

Easy does it: you just wire the temperature sensor directly to your Raspberry Pi. Rick has even made you a nice wiring diagram, so no excuses:

Wiring diagram for connecting Raspberry Pi Zero W to Adafruit BME280

There’s a little fiddling to make sure your Flat Flex cable attaches properly to the temperature sensor. The project walkthrough provides a really clear, illustrated step-by-step to help you.

The protoboard for the BME280 has 7 solder points, but the cable has 8 connectors
The temperature sensor has seven solder points but the cable has eight connectors, so you’ll need to get snippy

Software

Everything looks pretty simple according to the installation walkthrough: install a couple of Python libraries on Raspberry Pi OS and you’re there.

Screenshot of the temperature monitor
Initial State’s temperature monitor dashboard

You’ll need an access key from Initial State, but Rick explains you can get a free trial. The real-time temperature monitor dashboard is hosted on your Initial State account. If you want to have a poke around one that’s already up and running, have a look at Rick’s dashboard.
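
For a flavour of what the logging loop can look like, here’s a minimal sketch assuming the Adafruit CircuitPython BME280 library and Initial State’s ISStreamer package; the bucket name and access key are placeholders for your own, and Rick’s walkthrough has the real details:

import time
import board
from adafruit_bme280 import basic as adafruit_bme280
from ISStreamer.Streamer import Streamer

i2c = board.I2C()
sensor = adafruit_bme280.Adafruit_BME280_I2C(i2c)
streamer = Streamer(bucket_name="Freezer Monitor",
                    access_key="YOUR_INITIAL_STATE_KEY")

while True:
    # Log one reading a minute; the dashboard handles the alerts.
    streamer.log("Temperature (C)", round(sensor.temperature, 1))
    streamer.flush()
    time.sleep(60)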

Alert!

You can configure your own alert parameters from within the dashboard. Set your desired temperature and how much leeway you can tolerate.

You’ll get a text alert if the temperature strays too far above or below your personal setting.

A phone screen showing a text alert that a freezer temperature has gone too high
Get alerts straight to your phone

We can see this affordable fix helping out science labs that need to keep their expensive reagents cold but don’t have the budget for freezers with built-in monitoring, as well as people who need to keep medication at a certain temperature at home. Or maybe food outlets that don’t want to risk losing loads of pricy perishables stacked up in a chest freezer. Nice work, Rick and Elizabeth!

A banner with the words "Be a Pi Day donor today"

The post Remotely monitor freezer temperatures with Raspberry Pi appeared first on Raspberry Pi.

Celebrate Pi Day with us

Par Matt Richardson

Since launching our first-ever Pi Day fundraising campaign, we’ve been absolutely amazed by the generous support so many of you have shown for the young learners and creators in our community. Together, our Pi Day donors have stepped up to make an impact on over 20,000 learners (and counting!) who rely on the Raspberry Pi Foundation’s free digital making projects and online learning resources.

A young person using Raspberry Pi hardware and learning resources to do digital making

We need your help to keep the momentum going until 14 March, so that as many young people as possible gain the opportunity to develop new skills and get creative with computing. If you are able to contribute, there’s still time for you to join in with a gift of £3.14, £31.42, or perhaps even more.

We can’t thank you enough for your support, and as a way to show our gratitude, we offer you the option to see your name listed as a Pi Day donor in an upcoming issue of The MagPi magazine!

Join our live online Pi Day celebration

We’d also like to invite you to our virtual Pi Day celebration! This Sunday at 7pm GMT, we’ll host a special episode of Digital Making at Home, our weekly live stream for families and young digital makers. Eben will be on to share the story of Raspberry Pi, and of course we’ll be making something cool with Raspberry Pi and celebrating with all of you. Subscribe to the Foundation’s YouTube channel and turn on notifications to get a reminder about when we go live. 

A little help from our friends

Last but not least, we’d like to extend a big thank you to OKdo. They’re celebrating Pi Day with special deals throughout the weekend, and a generous 50% of those proceeds will be donated to the Raspberry Pi Foundation.

“We’re delighted to be supporting Raspberry Pi’s first ever Pi Day Campaign. Events like this are vital to aid our mutual mission to make technology accessible to young people all over the world. At OKdo we exist to spark a love of computing for children and help them to develop new skills so that they have every possible chance to fulfil their potential.”

Richard Curtin, OKdo’s SVP

We’re grateful to OKdo for championing our Pi Day campaign along with our friends at EPAM Systems and CanaKit.

Happy Pi Day, and we can’t wait to celebrate with you this weekend!

The post Celebrate Pi Day with us appeared first on Raspberry Pi.

What is PIO?

Par Alex Bate

Microcontroller chips, like our own RP2040 on Raspberry Pi Pico, offer hardware support for protocols such as SPI and I2C. This allows them to send and receive data to and from supported peripherals.

But what happens when you want to use unsupported tech, or multiple SPI devices? That’s where Programmable I/O, or PIO, comes in. PIO was developed just for RP2040, and is unique to the chip.

PIO allows you to create additional hardware interfaces, or even new types of interface. If you’ve ever looked at the peripherals on a microcontroller and thought “I need four UARTs and I only have two,” or “I’d like to output DVI video,” or even “I need to communicate with this accursed serial device I found, but there is no hardware support anywhere,” then you will have fun with PIO.

We’ve put together this handy explainer to help you understand PIO and how it can be used to add more devices to your Raspberry Pi Pico.
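
To give a feel for what a PIO program looks like, here’s the standard MicroPython blink pattern, adapted from the rp2 documentation’s example: a tiny state machine that toggles Pico’s onboard LED entirely on its own once started, leaving the CPU free.

import rp2
from machine import Pin

@rp2.asm_pio(set_init=rp2.PIO.OUT_LOW)
def blink():
    wrap_target()
    set(pins, 1) [31]   # drive the pin high, then idle 31 cycles
    nop()        [31]
    nop()        [31]
    nop()        [31]
    set(pins, 0) [31]   # drive the pin low, then idle again
    nop()        [31]
    nop()        [31]
    nop()        [31]
    wrap()

# Run the program on state machine 0 at 2 kHz, driving GPIO 25
# (the onboard LED), so the LED blinks a few times per second.
sm = rp2.StateMachine(0, blink, freq=2000, set_base=Pin(25))
sm.active(1)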

For more information on PIO and RP2040, check out this article from HackSpace magazine.

The post What is PIO? appeared first on Raspberry Pi.

Engaging Black girls in STEM learning through game design

Par Sue Sentance

Today is International Women’s Day, giving us the perfect opportunity to highlight a research project focusing on Black girls learning computing.

Two black girls sitting against an outside wall while working on a laptop

Between January and July 2021, we’re partnering with the Royal Academy of Engineering to host speakers from the UK and USA to give a series of research seminars focused on diversity and inclusion. By diversity, we mean any dimension that can be used to differentiate groups and people from one another. This might be, for example, age, gender, socio-economic status, disability, ethnicity, religion, nationality, or sexuality. The aim of inclusion is to embrace all people irrespective of difference. In this blog post, I discuss the third research seminar in this series.

Dr Jakita O. Thomas
Dr Jakita O. Thomas

This month we were delighted to hear from Dr Jakita O. Thomas from Auburn University and BlackComputHer, who talked to us about a seven-year qualitative study she conducted with a group of Black girls learning game design. Jakita is an Associate Professor of Computer Science and Software Engineering at Auburn University in Alabama, and Director of the CUlturally and SOcially Relevant (CURSOR) Computing Lab.

The SCAT programme

The Supporting Computational Algorithmic Thinking (SCAT) programme started in 2013 and was originally funded for three years. It was a free enrichment programme exploring how Black middle-school girls develop computational algorithmic thinking skills over time in the context of game design. After three years the funding was extended, giving Jakita and her colleagues the opportunity to continue the intervention with the same group of girls from middle school through to high school graduation (seven years in total). Twenty-three students were recruited onto the programme, and retention was extremely high.

Dr Jakita Thomas presents a slide: "Problem context: Black women and girls are rarely construed as producers of computer science knowledge in US schools and society. Design, learning, identity and teaching are inextricably linked and should come together to promote robust experiences for participation in a global world. Black girls in STEM+C environments are rarely served in such ways. Some scholars suggest that STEM is simply a neoliberal project. When we put that view in conversation with Black girls in an informal learning environment designed to promote Black female excellence, a more nuanced and complex perspective emerges."
Click to enlarge

The SCAT programme ran throughout each academic year and also involved a summer camp element. The programme included three types of activities: the two-week summer camp, twelve monthly workshops, and field trips, all focused on game design. The instructors on the programme were all Black women, either with or working towards doctorates in computer science, serving as role models to the girls.

The theoretical basis of the programme drew on a combination of:

  • Cognitive apprenticeship, i.e. learning from others with expertise in a particular field
  • Black Feminist Thought (based on the work of Patricia Hill Collins) as a foundation for valuing Black girls’ knowledge and lived experience as expertise they bring to their learning environment
  • Intersectionality, i.e. considering the intersection of multiple characteristics, e.g. race and gender

This context highlights that interventions to increase diversity in STEM or computing tend to support mainly white girls or Black and other ethnic minority boys, marginalising Black girls.

Why game design?

Game design was selected as a topic because it is popular with all young people as consumers. According to research Jakita drew on, over 94% of girls in the US aged 12 to 17 play video games, with little difference relating to race or socioeconomic status. However, game design is an industry in which African American women are under-represented. Women represent only 10 to 12% of the game design workforce, and less than 5% of the workforce are African American or Latino people of any gender. Therefore Jakita and her colleagues saw it as an ideal domain to work in with the girls.

Dr Jakita Thomas presents a slide: Game design cycle: brainstorming, storyboarding, physical prototyping, design document, software prototyping, implementation, quality assurance / maintenance"
Click to enlarge

Another reason for selecting game design as a topic was that it gave the students (the programme calls them scholars) the opportunity to design and create their own artefacts. This allowed the participants to select topics for games that really mattered to them, which Jakita suggested might be related to their own identity, and issues of equity and social justice. This aligns completely with the thoughts expressed by the speakers at our February seminar.

What was learned through SCAT?

Jakita explained that her findings suggest that the SCAT programme, through its intentional design, offered Black girls opportunities to radically shape their identities as producers, innovators, and disruptors of deficit perspectives. Deficit perspectives are ones that include implicit assumptions that privilege the values, beliefs, and practices of one group over another. Deficit thinking was a theme in our February seminar with Prof Tia Madkins, Dr Nicol R Howard, and Shomari Jones, and it was interesting to hear more about this.

Data sources of the project included analysis of online journal data and end of season questionnaires across the first three years of SCAT, which provided insights into the participants’ perceptions and feelings about their SCAT experience, their understanding of computational algorithmic thinking, their perceptions of themselves as game designers, and the application of concepts learned within SCAT to other areas of their lives outside of SCAT.

In the first three years of the programme, the number of participants who saw game design as a viable hobby went from 0% to 23% to 45%. Jakita and her colleagues also carried out qualitative analysis, which identified a desire to ‘find meaning and relevance in altruism’ as one theme among the participants. The researchers found that the participants started to reflect on their own narrative and identity through the programme. One girl on the programme said:

“At the beginning of SCAT, I didn’t understand why I was there. Then I thought about what I was doing. I was an African American girl learning how to properly learn game design. As I grew over the years in game designing, I gained a strong liking. The SCAT program has gifted me with a new hobby that most women don’t have, and for that I am grateful.”

– SCAT scholar (participant)

Jakita explained that the girls on the programme had formed a sisterhood, in that they came to know each other well and formed a strong and supportive community. In addition, what I found remarkable was the long-term impact of this programme: 22 out of the 23 young women that took part in the programme are now enrolled on STEM degree courses.

Dr Jakita Thomas presents a slide: "Conclusions and points of discussion: STEM learning for whom and to what ends is a complex narrative when centering Black girls because of the intersectional politics of their histories and STEM education opportunities. SCAT serves as a counter-space for STEM learning. Black girls should be positioned as producers of knowledge in STEM. Black girls need to have not only opportunities to acquire and develop STEM skills, capabilities and practices, but they also need time to reflect on those opportunities and experiences and assess whether and how STEM connects to their own interests, goals and aspirations (at least 12 months). It is imperative that learning scientists think from an intersectional perspective when considering how to design STEM learning environments for Black girls."
Jakita’s final slide, stimulating a great Q&A session (click to enlarge)

What next?

Read the paper on which Jakita’s seminar was based, download the presentation slides, and watch the video recording:

This research intervention obviously represents a very small sample, as is often the case with rich, qualitative studies, but there is much we can learn from it, and still much more to be done. In the UK, we do not have any ongoing or previously published research studies that look at intersectionality and computing education, and conducting similar research would be valuable. Jakita and her colleagues worked in the non-formal space, providing opportunities outside the formal curriculum, but throughout the academic year. We need to understand better the affordances of non-formal and formal learning for supporting engagement of learners from underrepresented groups in computing, perhaps particularly in England, where a mandatory computing curriculum from age 5 has been in place since 2014.

Next up in our free series

This was our 14th research seminar! You can find all the related blog posts on this page.

Next we’ve got three online events coming up in quick succession! In our seminar on Tuesday 20 April at 17:00–18:30 BST / 12:00–13:30 EDT / 9:00–10:30 PDT / 18:00–19:30 CEST, we’ll welcome Maya Israel from the University of Florida, who will be talking about Universal Design for Learning and computing. On Monday 26 April, we will be hosting a panel session on gender balance in computing. And at the seminar on Tuesday 2 May, we will be hearing from Dr Cecily Morrison (Microsoft Research) about computing and learners with visual disabilities.

To join any of these free events, click below and sign up with your name and email address:

We’ll send you the link and instructions. See you there!

The post Engaging Black girls in STEM learning through game design appeared first on Raspberry Pi.

Raspberry Pi thermal camera

Par Ashley Whittaker

It has been a cold winter for Tom Shaffner, and since he is working from home and leaving the heating on all day, he decided it was finally time to see where his house’s insulation could be improved.

camera attached to raspberry pi in a case
Tom’s setup inside a case with a cooling fan; the camera is taped on bottom right

An affordable solution

His first thought was to get a thermal IR (infrared) camera, but he found the price hasn’t yet come down as much as he’d hoped. They range from several thousand dollars down to a few hundred, with a $50 option just to rent one from a hardware store for 24 hours.

When he saw the $50 option, he realised he could just buy the $60 (£54) MLX90640 Thermal Camera from Pimoroni and attach it to a Raspberry Pi. Tom used a Raspberry Pi 4 for this project. Problem affordably solved.

A joint open source effort

Once Tom’s hardware arrived, he took advantage of the opportunity to combine elements of several other projects that had caught his eye into a single, consolidated Python library that can be downloaded via pip and run both locally and as a web server. Tom thanks Валерий Курышев, Joshua Hrisko, and Adrian Rosebrock for their work, on which this solution was partly based.

heat map image showing laptop and computer screen in red with surroundings in blue
The heat image on the right shows that Tom’s computer and laptop screens are the hottest parts of the room

Tom has also published everything on GitHub for further open source development by any enterprising individuals who are interested in taking this even further.

Quality images

The big question, though, was whether the image quality would be good enough to be of real use. A few years back, the best cheap thermal IR camera had only an 8×8 resolution – not great. The magic of the MLX90640 Thermal Camera is that for the same price the resolution jumps to 24×32, giving each frame 768 different temperature readings.

heat map image showing window in blue and lamp in red
Thermal image showing heat generated by a ceiling lamp but lost through windows

Add a bit of interpolation and image enlargement and the end result gets the job done nicely. Stream the video over your local wireless network, and you can hold the camera in one hand and your phone in the other to use as a screen.
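
As a rough sketch of that capture-and-upscale step (Tom’s full web-server version is on his GitHub), something like this should work, assuming the adafruit-circuitpython-mlx90640 library plus NumPy and SciPy:

import numpy as np
import board
import adafruit_mlx90640
from scipy import ndimage

i2c = board.I2C()
mlx = adafruit_mlx90640.MLX90640(i2c)
mlx.refresh_rate = adafruit_mlx90640.RefreshRate.REFRESH_8_HZ

frame = [0.0] * 768          # 24 x 32 temperature readings
mlx.getFrame(frame)
image = np.array(frame).reshape(24, 32)

# Interpolate 10x in each direction: 24 x 32 becomes 240 x 320.
smooth = ndimage.zoom(image, 10)
print("min %.1f C, max %.1f C" % (smooth.min(), smooth.max()))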

Bonus security feature

Bonus: If you leave the web server running when you’re finished thermal imaging, you’ve got yourself an affordable infrared security camera.

video showing the thermal camera cycling through interpolation and color modes and varying view
Live camera cycling through interpolation and colour modes and varying view

Documentation on the setup, installation, and results are all available on Tom’s GitHub, along with more pictures of what you can expect.

And you can connect with Tom on LinkedIn if you’d like to learn more about this “technically savvy mathematical modeller”.

The post Raspberry Pi thermal camera appeared first on Raspberry Pi.

Swing into action with an homage to Pitfall! | Wireframe #48

Par Ryan Lambie

Grab onto ropes and swing across chasms in our Python rendition of an Atari 2600 classic. Mark Vanstone has the code.

Whether it was because of the design brilliance of the game itself or because Raiders of the Lost Ark had just hit the box office, Pitfall Harry became a popular character on the Atari 2600 in 1982.

His hazardous attempts to collect treasure struck a chord with eighties gamers, and saw Pitfall!, released by Activision, sell over four million copies. A sequel, Pitfall II: The Lost Caverns, quickly followed, and the game was ported to several other systems, even making its way to smartphones and tablets in the 21st century.

Pitfall

Designed by David Crane, Pitfall! was released for the Atari 2600 and published by Activision in 1982

The game itself is a quest to find 32 items of treasure within a 20-minute time limit. There are a variety of hazards for Pitfall Harry to navigate around and over, including rolling logs, animals, and holes in the ground. Some of these holes can be jumped over, but some are too wide and have a convenient rope swinging from a tree to aid our explorer in getting to the other side of the screen. Harry must jump towards the rope as it moves towards him and then hang on as it swings him over the pit, releasing his grip at the other end to land safely back on firm ground.

For this code sample, we’ll concentrate on the rope swinging (and catching) mechanic. Using Pygame Zero, we can get our basic display set up quickly. In this case, we can split the background into three layers: the background, including the back of the pathway and the tree trunks, the treetops, and the front of the pathway. With these layers we can have a rope swinging with its pivot point behind the leaves of the trees, and, if Harry gets a jump wrong, it will look like he falls down the hole in the ground. The order in which we draw these to the screen is background, rope, tree-tops, Harry, and finally the front of the pathway.

Now, let’s get our rope swinging. We can create an Actor and anchor it to the centre and top of its bounding box. If we rotate it by changing the angle property of the Actor, then it will rotate at the top of the Actor rather than the mid-point. We can make the rope swing between -45 degrees and 45 degrees by increments of 1, but if we do this, we get a rather robotic sort of movement. To fix this, we add an ‘easing’ value which we can calculate using a square root to make the rope slow down as it reaches the extremes of the swing.
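
Here’s a minimal sketch of that easing idea in Pygame Zero, assuming a ‘rope’ image in your images folder; Mark’s full code will differ in the details:

import math

rope = Actor("rope", anchor=("center", "top"), pos=(400, 0))
rope.angle = 0
swing_dir = 1   # +1 or -1

def update():
    global swing_dir
    # The step shrinks as the rope nears +/-45 degrees, so the
    # swing eases out at the extremes instead of moving robotically.
    easing = math.sqrt((45 - abs(rope.angle)) / 45) + 0.05
    rope.angle = max(-45, min(45, rope.angle + swing_dir * easing))
    if abs(rope.angle) >= 45:
        swing_dir = -swing_dir

def draw():
    screen.clear()
    rope.draw()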

Our homage to the classic Pitfall! Atari game. Can you add some rolling logs and other hazards?

Our Harry character will need to be able to run backwards and forwards, so we’ll need a few frames of animation. There are several ways of coding this, but for now, we can take the x coordinate and work out which frame to display as the x value changes. If we have four frames of running animation, then we can apply the %4 (modulo) operator to a value derived from the x coordinate, giving us animation frames 0, 1, 2, and 3. We use these frames for running to the right, and if he’s running to the left, we just mirror the images. We can check to see if Harry is on the ground or over the pit, and if he needs to be falling downward, we add to his y coordinate. If he’s jumping (by pressing the SPACE bar), we reduce his y coordinate.
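
You can check the frame cycle without any graphics at all; this tiny self-contained snippet shows how the modulo trick maps a changing x value onto frames 0 to 3, with the divisor controlling how fast the animation cycles:

# As x increases, the frame index loops 0, 1, 2, 3, 0, ...
for x in range(0, 64, 8):
    frame = (x // 8) % 4
    print(x, "-> harry_run_%d" % frame)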

We now need to check if Harry has reached the rope, so after a collision, we check to see if he’s connected with it, and if he has, we mark him as attached and then move him with the end of the rope until the player presses the SPACE bar and he can jump off at the other side. If he’s swung far enough, he should land safely and not fall down the pit. If he falls, then the player can have another go by pressing the SPACE bar to reset Harry back to the start.

That should get Pitfall Harry over one particular obstacle, but the original game had several other challenges to tackle – we’ll leave you to add those for yourselves.

Pitfall Python code

Here’s Mark’s code for a Pitfall!-style platformer. To get it working on your system, you’ll need to install Pygame Zero. And to download the full code and assets, head here.

Get your copy of Wireframe issue 48

You can read more features like this one in Wireframe issue 48, available directly from Raspberry Pi Press — we deliver worldwide.
Wireframe issue 48
And if you’d like a handy digital version of the magazine, you can also download issue 48 for free in PDF format.
A banner with the words "Be a Pi Day donor today"

The post Swing into action with an homage to Pitfall! | Wireframe #48 appeared first on Raspberry Pi.

Raspberry Pi Pico – Vertical innovation

Par James Adams

Our Chief Operating Officer and Hardware Lead James Adams talked to The MagPi Magazine about building Raspberry Pi’s first microcontroller platform.

On 21 January we launched the $4 Raspberry Pi Pico. As I write, we’ve taken orders for nearly a million units, and are working hard to ramp production of both the Pico board itself and the chip that powers it, the Raspberry Pi RP2040.

Close up of R P 20 40 chip embedded in a Pico board
RP2040 at the heart of Raspberry Pi Pico

Microcontrollers are a huge yet largely unseen part of our modern lives. They are the hidden computers running most home appliances, gadgets, and toys. Pico and RP2040 were born of our desire to do for microcontrollers what we had done for computing with the larger Raspberry Pi boards. We wanted to create an innovative yet radically low-cost platform that was easy to use, powerful, yet flexible.

It became obvious that to stand out from the crowd of existing products in this space and to hit our cost and performance goals, we would need to build our own chip.

I and many of the Raspberry Pi engineering team have been involved in chip design in past lives, yet it took a long time to build a functional chip team from scratch. As well as requiring specialist skills, you need a lot of expensive tools and IP; and before you can buy these things, there is a lot of work required to evaluate and decide exactly which expensive goodies you’ll need. After a slow start, for the past couple of years we’ve had a small team working on it full-time, with many others pulled in to help as needed.

Low-cost and flexible

The Pico board was designed alongside RP2040 – in fact we designed the RP2040 pinout to work well on Pico, so we could use an inexpensive two-layer PCB, without compromising on the layout. A lot of thought has gone into making it as low-cost and flexible as possible – from the power circuitry to packaging the units onto tape and reel (which is cost-effective and has good packing density, reducing shipping costs).

“This ‘full stack’ design approach has allowed optimisation across the different parts”

With Pico we’ve hit the ‘pocket money’ price point, yet in RP2040 we’ve managed to pack in enough CPU performance and RAM to run more heavyweight applications such as MicroPython, and AI workloads like TinyML. We’ve also added genuinely new and innovative features such as the Programmable I/O (PIO), which can be programmed to ‘bit-bang’ almost any digital interface without using valuable CPU cycles. Finally, we have released a polished C/C++ SDK, comprehensive documentation and some very cool demos!

A reel of Raspberry Pi Pico boards

For me, this project has been particularly special as I began my career at a small chip-design startup. This was a chance to start from a clean sheet and design silicon the way we wanted to, and to talk about how and why we’ve done it, and how it works.

Pico is also our most vertically integrated product, meaning we control everything from the chip through to finished boards. This ‘full stack’ design approach has allowed optimisation across the different parts, creating a more cost-effective and coherent whole (it’s no wonder we’re not the only fruit company doing this).

And of course, it is designed here in Cambridge, birthplace of so many chip companies and computing pioneers. We’re very pleased to be continuing the Silicon Fen tradition.

Get The MagPi 103 now

You can grab the brand-new issue right now online from the Raspberry Pi Press store, or via our app on Android or iOS. You can also pick it up from supermarkets and newsagents, but make sure you do so safely while following all your local guidelines.

magpi magazine cover issue 103

Finally, there’s also a free PDF you can download.

A banner with the words "Be a Pi Day donor today"

The post Raspberry Pi Pico – Vertical innovation appeared first on Raspberry Pi.

Make an animated sign with Raspberry Pi Pico

Par Andrew Gregory

Light up your living room like Piccadilly Circus with this Raspberry Pi Pico project from the latest issue of HackSpace magazine. Don’t forget, it’s not too late to get your hands on our new microcontroller for FREE if you subscribe to HackSpace magazine.

HUB75 LED panels provide an affordable way to add graphical output to your projects. They were originally designed for large advertising displays (such as the ones made famous by Piccadilly Circus in London, and Times Square in New York). However, we can use a little chunk of these bright lights in our projects. They’re often given a ‘P’ value, such as P3 or P5 for the number of millimetres between the different RGB LEDs. These don’t affect the working or wiring in any way.

We used a 32×32 Adafruit screen. Other screens of this size may work, or may be wired differently. It should be possible to get screens of different sizes working, but you’ll have to dig through the code a little more to get it running properly.

The most cost-effective way to add 1024 RGB LEDs to your project

The protocol for running these displays involves throwing large amounts of data down six different data lines. This lets you light up one portion of the display. You then switch to a different portion of the display and throw the data down the data lines again. When you’re not actively writing to a particular segment of the display, those LEDs are off.

There’s no in-built control over the brightness levels – each LED is either on or off. You can add some control over brightness by flicking pixels on and off for different amounts of time, but you have to manage this yourself. We won’t get into that in this tutorial, but if you’d like to investigate this, take a look at the box on ‘Going Further’.

The code for this is on GitHub (hsmag.cc/Hub75). If you spot a way of improving it, send us a pull request

The first thing you need to do is wire up the screen. There are 16 connectors, and there are three different types of data sent – colour values, address values, and control values. You can wire this up in different ways, but we just used header wires to connect between a cable and a breadboard. See here for details of the connections.

These screens can draw a lot of power, so it’s best not to power them from your Pico’s 5V output. Instead, use a separate 5V supply which can output enough current. A 1A supply should be more than enough for this example. If you’re changing it, start with a small number of pixels lit up and use a multimeter to read the current.

With it wired up, the first thing to do is grab the code and run it. If everything’s working correctly, you should see the word Pico bounce up and down on the screen. It is a little sensitive to the wiring, so if you see some flickering, make sure that the wires are properly seated. You may want to just display the word ‘Pico’. If so, congratulations, you’re finished!

However, let’s take a look at how to customise the display. The first things you’ll need to adapt if you want to display different data are the text functions – there’s one of these for each letter in Pico. For example, the following draws a lower-case ‘i’:

def i_draw(init_x, init_y, r, g, b):
    # Four pixels in a vertical line form the stem of the 'i'...
    for i in range(4):
        light_xy(init_x, init_y+i+2, r, g, b)
    # ...and a single pixel above it forms the dot
    light_xy(init_x, init_y, r, g, b)

As you can see, this uses the light_xy method to set a particular pixel a particular colour (r, g, and b can all be 0 or 1). You’ll also need your own draw method. The current one is as follows:

def draw_text():
    global text_y
    global direction
    global writing
    global current_rows
    global rows

    writing = True
    text_y = text_y + direction
    if text_y > 20: direction = -1
    if text_y < 5: direction = 1
    rows = [0]*num_rows
    #fill with black
    for j in range(num_rows):
        rows[j] = [0]*blocks_per_row

    p_draw(3, text_y-4, 1, 1, 1)
    i_draw(9, text_y, 1, 1, 0)
    c_draw(11, text_y, 0, 1, 1)
    o_draw(16, text_y, 1, 0, 1)
    writing = False

This sets the writing global variable to stop it drawing this frame if it’s still being updated, and then just scrolls the text_y variable between 5 and 20 to bounce the text up and down in the middle of the screen.

This method runs on the second core of Pico, so it can still throw out data constantly from the main processing core without it slowing down to draw images.
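
In MicroPython, that hand-off to the second core can be as simple as the following sketch, assuming the draw_text() function above:

import _thread

def core1_task():
    while True:
        draw_text()   # keep refreshing the frame buffer on core 1

_thread.start_new_thread(core1_task, ())
# Core 0 is now free to stream pixel data out to the panel
# without pausing while the next frame is drawn.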

Get HackSpace magazine – Issue 40

Each month, HackSpace magazine brings you the best projects, tips, tricks and tutorials from the makersphere. You can get it from the Raspberry Pi Press online store, The Raspberry Pi store in Cambridge, or your local newsagents.

Each issue is free to download from the HackSpace magazine website.

When you subscribe, we’ll send you a Raspberry Pi Pico for FREE.

A banner with the words "Be a Pi Day donor today"

The post Make an animated sign with Raspberry Pi Pico appeared first on Raspberry Pi.

How your young people can create with tech for Coolest Projects 2021

Par Helen Drury

In our free Coolest Projects online showcase, we invite a worldwide community of young people to come together and celebrate what they’ve built with technology. For this year’s showcase, we’ve already got young tech creators from more than 35 countries registered, including India, Ireland, the UK, the USA, Australia, Serbia, Japan, and Syria!

Two siblings presenting their digital making project at a Coolest Projects showcase

Register to become part of the global Coolest Projects community

Everyone up to age 18 can register for Coolest Projects to become part of this community with their own tech creation. We welcome all experience levels and all kinds of projects, from the very first Scratch animation to a robot with machine learning capacity! The beauty of Coolest Projects is in the diversity of what the young tech creators make.

Young people can register projects in six categories: Hardware, Scratch, Mobile Apps, Websites, Games, and Advanced Programming. Projects need to be fully registered by Monday 3 May 2021, but they don’t need to be finished then — at Coolest Projects we celebrate works in progress just as much as finished creations!

To learn more about the registration process, watch the video below or read our guide on how to register.

Our Coolest Projects support for young people and you

Here are the different ways we’re supporting your young people — and you — with project creation!

Online resources for designing and creating projects

Download the free Coolest Projects workbook that walks young people through the whole creation process, from finding a topic or problem they want to address, to idea brainstorming, to testing their project:

The five steps you will carry out when creating a tech project: 1 Pick a problem. 2 Who are you helping with your project? 3 Generate ideas. 4 Design and build. 5 Test and tweak
Our Coolest Projects worksheets have detailed guidance about all five steps of project creation.

Explore more than 200 free, step-by-step project guides for learning coding and digital making skills that your young people can use to find help and inspiration! For more ideas on what your young people can make for Coolest Projects, have a look around last year’s online showcase gallery.

Live streams for young people

This Wednesday 3 March at 19:00 GMT / 14:00 ET, young people can join a special Digital Making at Home live stream about capturing ideas for projects. We’ll share practical tips and inspiration to help them get started with building a Coolest Projects creation:

On Tuesday 23 March, 16:00 GMT / 11:00 ET, young people can join the Coolest Projects team on a live stream to talk to them about all things Coolest Projects and ask all their questions! Subscribe to our YouTube channel and turn on notifications to be reminded about this live stream.

Online workshops for educators & parents

Join our free online workshops where you as an educator or parent can learn how to best support young people to take part:

Celebrating young people’s creativity

Getting creative with technology is truly empowering for young people, and anything your young people want to create will be celebrated by us and the whole Coolest Projects community. We’re so excited to see their projects, and we can’t wait to celebrate all together at our big live stream celebration event in June! Don’t let your young people miss their chance to be part of the fun.

Register your project for the Coolest Projects online showcase
A banner with the words "Be a Pi Day donor today"

The post How your young people can create with tech for Coolest Projects 2021 appeared first on Raspberry Pi.

Pi Day at the Raspberry Pi Foundation

Par Eben Upton

Pi Day is a special occasion for people all around the world (your preferred date format notwithstanding), and I love seeing all the ways that makers, students, and educators celebrate. This year at the Raspberry Pi Foundation, we’re embracing Pi Day as a time to support young learners and creators in our community. Today, we launch our first Pi Day fundraising campaign. From now until 14 March, I’d like to ask for your help to empower young people worldwide to learn computing and become confident, creative digital makers and engineers.

A boy using a Raspberry Pi desktop computer to code

Millions of learners use the Raspberry Pi Foundation’s online coding projects to develop new skills and get creative with technology. Your donation to the Pi Day campaign will support young people to access these high-quality online resources, which they need more urgently than ever amidst disruptions to schools and coding clubs. Did I mention that our online projects are offered completely free and in dozens of languages? That’s possible thanks to Raspberry Pi customers and donors who power our educational mission.

It’s not only young people who rely on the Raspberry Pi Foundation’s free online coding projects, but also teachers, educators, and volunteers in coding clubs:

“The project resources for Python and Scratch make it really easy for the children to learn programming and create projects successfully, even if they have limited prior experience — they are excellent.”

— Code Club educator in the UK

“The best thing […] is the accessibility to a variety of projects and ease of use for a variety of ages and needs. I love checking the site for what I may have missed and the next project my students can do!”

— Code Club educator in the USA
Two girls doing physical computing with Raspberry Pi

Your Pi Day gift will make double the impact thanks to our partner EPAM, who is generously matching all donations up to a total of $5000. As a special thanks to each of you who contributes, you’ll have the option to see your name listed in an upcoming issue of The MagPi magazine!

All young people deserve the opportunity to thrive in today’s technology-driven world. As a donor to the Raspberry Pi Foundation, you can make this a reality. Any amount you are able to give to our Pi Day campaign — whether it’s $3.14, $31.42, or even more — makes a difference. You also have the option to sign up as a monthly donor.

Let’s come together to give young people the tools they need to make things, solve problems, and shape their future using technology. Thank you.

A banner with the words "Be a Pi Day donor today"

PS Thanks again to EPAM for partnering with us to match your gifts up to $5000 until 14 March, and to CanaKit for their generous Pi Day contribution of $3141!

The post Pi Day at the Raspberry Pi Foundation appeared first on Raspberry Pi.

#MonthOfMaking is back in The MagPi 103!

Par Rob Zwetsloot

Hey folks, Rob from The MagPi here! I hope you’ve been doing well. Despite how it feels, a brand-new March is just around the corner. Here at The MagPi, we like to celebrate March with our annual #MonthOfMaking event, where we want to motivate you to get making.

A MonthOfMaking project: Someone wearing a wearable tech project featuring LEDs, a two-digit LED matrix, and a tablet screen. The person is high-fiving someone who is out of view.
You could make tech you can wear

But what should I make?

Making what? Anything you want. Flex your creative building skills with programming, circuitry, woodworking, metalwork, knitting, baking, photography, or whatever else you’ve been wanting to try out. Just make it, and share it with the hashtag #MonthOfMaking.

A MonthOfMaking project: a wildlife camera camouflaged in branches
You could make something to hide in nature while you capture… nature

In The MagPi 103 we have a big feature on alternative ways you can make — at least alternative to what we usually cover in the magazine. From sewing and embroidery to recycling and animation, we hope you’ll be inspired to try something new.

Try something new with Raspberry Pi Pico

I’ve got a few projects lined up myself, including some Raspberry Pi Pico stuff I’ve been mulling over.

A MonthOfMaking project: a homemade chandelier consisting of glass bottles and an LED ring
You could make a chandelier light fitting out of drinks bottles?!

Speaking of which: we also show you some easy Raspberry Pi Pico projects to celebrate its recent release! You’ll discover all the ways you can get started with and learn more about Raspberry Pi’s first microcontroller.

All this and our usual selection of articles on weather maps, on-air lights, meme generators, hardware reviews, and much more is packed into issue 103!

A MonthOfMaking project: two Nintendo Game Boys, one of them hacked with two extra buttons and a colour display
Maybe you could tinker with some old tech

Get The MagPi 103 now

You can grab the brand-new issue right now online from the Raspberry Pi Press store, or via our app on Android or iOS. You can also pick it up from supermarkets and newsagents, but make sure you do so safely while following all your local guidelines.

magpi magazine cover issue 103

Finally, there’s also a free PDF you can download. Good luck during the #MonthOfMaking, folks! I’ll see y’all online.

The post #MonthOfMaking is back in The MagPi 103! appeared first on Raspberry Pi.

Universal design for learning in computing | Hello World #15

Par Hayley Leonard

In our brand-new issue of Hello World magazine, Hayley Leonard from our team gives a primer on how computing educators can apply the Universal Design for Learning framework in their lessons.

Cover of issue 15 of Hello World magazine

Universal Design for Learning (UDL) is a framework for considering how tools and resources can be used to reduce barriers and support all learners. Based on findings from neuroscience, it has been developed over the last 30 years by the Center for Applied Special Technology (CAST), a nonprofit education research and development organisation based in the US. UDL is currently used across the globe, with research showing it can be an efficient approach for designing flexible learning environments and accessible content.

A computing classroom populated by students with diverse genders and ethnicities

Engaging a wider range of learners is an important issue in computer science, which is often not chosen as an optional subject by girls and those from some minority ethnic groups. Researchers at the Creative Technology Research Lab in the US have been investigating how UDL principles can be applied to computer science, to improve learning and engagement for all students. They have adapted the UDL guidelines to a computer science education context and begun to explore how teachers use the framework in their own practice. The hope is that understanding and adapting how the subject is taught could help to increase the representation of all groups in computing.

The UDL guidelines help educators anticipate barriers to learning and plan activities to overcome them.

A scientific approach

The UDL framework is based on neuroscientific evidence which highlights how different areas or networks in the brain work together to process information during learning. Importantly, there is variation across individuals in how each of these networks functions and how they interact with each other. This means that a traditional approach to teaching, in which a main task is differentiated for certain students with special educational needs, may miss out on the variation in learning between all students across different tasks.

A stylised representation of the human brain
The UDL framework is based on neuroscientific evidence

The UDL guidelines highlight different opportunities to take learner differences into account when planning lessons. The framework is structured according to three main principles, which are directly related to three networks in the brain that play a central role in learning. It encourages educators to plan multiple, flexible methods of engagement in learning (affective networks), representation of the teaching materials (recognition networks), and opportunities for action and expression of what has been learnt (strategic networks).

The three principles of UDL are each expanded into guidelines and checkpoints that allow educators to identify the different methods of engagement, representation, and expression to be used in a particular lesson. Each principle is also broken down into activities that allow learners to access the learning goals, remain engaged and build on their learning, and begin to internalise the approaches to learning so that they are empowered for the future.

Examples of UDL guidelines for computer science education from the Creative Technology Research Lab

Multiple means of engagement

Provide options for recruiting interests
* Give students choice (software, project, topic)
* Allow students to make projects relevant to culture and age

Provide options for sustaining effort and persistence
* Utilise pair programming and group work with clearly defined roles
* Discuss the integral role of perseverance and problem-solving in computer science

Provide options for self-regulation
* Break up coding activities with opportunities for reflection, such as ‘turn and talk’ or written questions
* Model different strategies for dealing with frustration appropriately

Multiple means of representation

Provide options for perception
* Model computing through physical representations as well as through interactive whiteboard/videos etc.
* Select coding apps and websites that allow adjustment of visual settings (e.g. font size/contrast) and that are compatible with screen readers

Provide options for language, mathematical expressions, and symbols
* Teach and review computing vocabulary (e.g. code, animations, algorithms)
* Provide reference sheets with images of blocks, or with common syntax when using text

Provide options for comprehension
* Encourage students to ask questions as comprehension checkpoints
* Use relevant analogies and make cross-curricular connections explicit

Multiple means of action and expression

Provide options for physical action
* Include CS unplugged activities that show physical relationships of abstract computing concepts
* Use assistive technology, including a larger or smaller mouse or touchscreen devices

Provide options for expression and communication
* Provide sentence starters or checklists for communicating in order to collaborate, give feedback, and explain work
* Provide options that include starter code

Provide options for executive function
* Embed prompts to stop and plan, test, or debug throughout a lesson or project
* Demonstrate debugging with think-alouds
Each principle of the UDL framework is associated with three areas of activity which may be considered when planning lessons or units of work. Not every area of activity needs to be covered in every lesson, and some may prove more important in particular contexts than others. The full table and explanation can be found on the Creative Technology Research Lab website at ctrl.education.ufl.edu/projects/tactic.

Applying UDL to computer science education

While an advantage of UDL is that the principles can be applied across different subjects, it is important to think carefully about what activities addressing these principles could look like in the case of computer science.

Maya Israel
Researcher Maya Israel will speak at our April seminar

Researchers at the Creative Technology Research Lab, led by Maya Israel, have identified key activities, some of which are presented in the table above. These guidelines will help educators anticipate potential barriers to learning and plan activities that can overcome them, or adapt activities from those in existing schemes of work, to help engage the widest possible range of students in the lesson.

UDL in the classroom

As well as suggesting approaches to applying UDL to computer science education, the research team at the Creative Technology Research Lab has also investigated how teachers are using UDL in practice. Israel and colleagues worked with four novice computer science teachers in US elementary schools to train them in the use of UDL and understand how they applied the framework in their teaching.

Smiling learners in a computing classroom

The research found that the teachers were most likely to include in their teaching multiple means of engagement, followed by multiple methods of representation. For example, they all offered choice in their students’ activities and provided materials in different formats (such as oral and visual presentations and demonstrations). They were less likely to provide multiple means of action and expression, and mainly addressed this principle through supporting students in planning work and checking their progress against their goals.

Although the study included only four teachers, it highlighted the flexibility of the UDL approach in catering for different needs within variable teaching contexts. More research will be needed in future, with larger samples, to understand how successful the approach is in helping a wide range of students to achieve good learning outcomes.

Find out more about using UDL

There are numerous resources designed to help teachers learn more about the UDL framework and how to apply it to teaching computing. The CAST website (helloworld.cc/cast) includes an explainer video and the detailed UDL guidelines. The Creative Technology Research Lab website has computing-specific ideas and lesson plans using UDL (helloworld.cc/udl).

Maya Israel will be presenting her research at our computing education research seminar series, on 20 April 2021. Our seminars are free to attend and open to anyone from anywhere around the world. Find out more about the current seminar series, which focuses on diversity and inclusion in computing education.


Subscribe to Hello World for free

In issue 15 of Hello World, we hear from five teachers who have made the switch to computing from another subject. They tell us about the challenges they have faced, as well as the joys of teaching young people how to create new things with technology. All this and much, much more in the new issue!

Educators based in the UK can subscribe to receive print copies for free!

The post Universal design for learning in computing | Hello World #15 appeared first on Raspberry Pi.

How to get started with FUZIX on Raspberry Pi Pico

Par Alasdair Allan

FUZIX is an old-school Unix clone that was initially written for the 8-bit Zilog Z80 processor and released by Alan Cox in 2014. At one time one of the most active Linux developers, Cox stepped back from kernel development in 2013. While the initial announcement has been lost in the mists of time (he made it on the now-defunct Google+), Cox jokingly recommended the system for those longing for the good old days when all the source code still fitted on a single floppy disk.

FUZIX running on Raspberry Pi Pico
FUZIX running on Raspberry Pi Pico.

Since then FUZIX has been ported to other architectures such as 6502, 68000, and the MSP430. Earlier in the week David Given — who wrote both the MSP430 and ESP8266 ports — went ahead and ported it to Raspberry Pi Pico and RP2040.

So you can now run Unix on a $4 microcontroller.

Building FUZIX from source

FUZIX is a “proper” Unix with a serial console on Pico’s UART0 and SD card support, using the card both for the filesystem and for swap space. While there is a binary image available, it’s easy enough to build from source.

If you don’t already have the Raspberry Pi Pico toolchain set up and working you should go ahead and set up the C/C++ SDK.

Afterwards you need to grab the Pico port from GitHub.

$ git clone https://github.com/davidgiven/FUZIX.git
$ cd FUZIX
$ git checkout rpipico

Then change directory to the platform port

$ cd Kernel/platform-rpipico/

and edit the first line of the Makefile to set the path to your pico-sdk.

So for instance if you’re building things on a Raspberry Pi and you’ve run the pico_setup.sh script, or followed the instructions in our Getting Started guide, you’d point the PICO_SDK_PATH to

export PICO_SDK_PATH = /home/pi/pico/pico-sdk

After that you can go ahead and build both the FUZIX UF2 file and the root filesystem.

$ make world -j
$ ./update-flash.sh

If everything goes well you should have a UF2 file in build/fuzix.uf2 and a filesystem.img image file in your current working directory.

You can now load the UF2 file onto your Pico in the normal way.

Go grab your Raspberry Pi Pico board and a micro USB cable. Plug the cable into your Raspberry Pi or laptop, then press and hold the BOOTSEL button on your Pico while you plug the other end of the micro USB cable into the board. Then release the button after the board is plugged in.

A disk volume called RPI-RP2 should pop up on your desktop. Double-click to open it, and then drag and drop the UF2 file into it.
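
If you prefer working at the command line on a Raspberry Pi, copying the file across has the same effect. This is just a sketch, assuming the volume has auto-mounted at /media/pi/RPI-RP2; adjust the path if yours mounts elsewhere.

$ cp build/fuzix.uf2 /media/pi/RPI-RP2/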

The volume will automatically unmount, and your Pico is now running Unix. Unfortunately it won’t be much use without a filesystem.

Building a bootable SD card

The filesystem.img image file we built earlier isn’t a bootable image. Unlike the Raspberry Pi OS images you might be used to, you can’t just use something like Raspberry Pi Imager to write it to an SD card. We’re going to have to get our hands a bit dirtier than that.

The following instructions are for building your file system on a Raspberry Pi, or another similar Linux platform. Comparable tools are available on both MS Windows and Apple macOS, but the exact details will differ.

Go grab a microSD card. As the partitions we’re going to put onto it are only going to take up 34MB it doesn’t really matter what size you’ve got to hand. I was using a 4GB card, as that was the smallest I could find, but it’s not that important.

Now plug the card into a USB card reader and then into your Raspberry Pi or laptop computer. We’re going to have to build the partition table that FUZIX is expecting, which consists of two partitions: the first a 2MB swap partition, and the second a 32MB root partition into which we can copy the root filesystem, our filesystem.img file.

Raspberry Pi 4 with USB card reader
Raspberry Pi 4 with USB card reader.

After plugging your card into the reader you can find it from the command line using the lsblk command. If you have a blank unformatted card it will be visible as /dev/sda.

$ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           8:0    1  3.7G  0 disk 
mmcblk0     179:0    0 14.9G  0 disk 
├─mmcblk0p1 179:1    0  256M  0 part /boot
└─mmcblk0p2 179:2    0 14.6G  0 part /
$

But if the card is already formatted you might instead see something like this

$ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           8:0    1  3.7G  0 disk 
└─sda1        8:1    1  3.7G  0 part /media/pi/USB
mmcblk0     179:0    0 14.9G  0 disk 
├─mmcblk0p1 179:1    0  256M  0 part /boot
└─mmcblk0p2 179:2    0 14.6G  0 part /
$

which is a FAT-formatted card with an MBR, named “USB”, that your Raspberry Pi has automatically mounted under /media/pi/USB.

If your card has mounted, just go ahead and unmount it as follows:

$ umount /dev/sda1

Then, looking again using lsblk, you should see

$ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           8:0    1  3.7G  0 disk 
└─sda1        8:1    1  3.7G  0 part 
mmcblk0     179:0    0 14.9G  0 disk 
├─mmcblk0p1 179:1    0  256M  0 part /boot
└─mmcblk0p2 179:2    0 14.6G  0 part /
$

at which point we can delete the current partition table by zeroing out the first part of the card and deleting the “start of disk” structures.

$ sudo dd if=/dev/zero of=/dev/sda bs=512 count=1

If you run lsblk again afterwards you’ll see that the sda1 partition has been deleted.

Next we’ll use fdisk to create a new partition table. Type the following

$ sudo fdisk /dev/sda

to put you at the fdisk prompt. Then type “o” to create a new DOS disklabel

Command (m for help): o
Created a new DOS disklabel with disk identifier 0x6e8481a2.

followed by “n” to create a new partition:

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-7744511, default 2048): 2048
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-7744511, default 7744511): +2M 
Created a new partition 1 of type 'Linux' and of size 2 MiB.

Depending on the initial state of your disk you may be prompted that the partition “contains a vfat signature” and asked whether you want to remove the signature. If asked, just type “Y” to confirm.

Next, we’ll set the type for this partition to “7F”

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): 7F
Changed type of partition 'Linux' to 'unknown'.

to create the 2MB swap partition that FUZIX is expecting. From here we need to create a second 32MB partition to hold our root file system:

Command (m for help): n
Partition type
   p   primary (1 primary, 0 extended, 3 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (2-4, default 2): 2
First sector (6144-7744511, default 6144): 6144
Last sector, +/-sectors or +/-size{K,M,G,T,P} (6144-7744511, default 7744511): +32M

Created a new partition 2 of type 'Linux' and of size 32 MiB.

Afterwards if you type “p” at the fdisk prompt you should see something like this:

Command (m for help): p
Disk /dev/sda: 3.7 GiB, 3965190144 bytes, 7744512 sectors
Disk model: STORAGE DEVICE  
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe121b9a3

Device     Boot Start   End Sectors Size Id Type
/dev/sda1        2048  6143    4096   2M 7f unknown
/dev/sda2        6144 71679   65536  32M 83 Linux

If you do, you can type “w” to write and save the partition table.
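
If you’d rather script the partitioning than step through the interactive prompts, the same layout can be created non-interactively with sfdisk. This is only a sketch under the same assumptions as above; double-check with lsblk that /dev/sda really is your SD card before running it.

$ printf 'start=2048, size=4096, type=7f\nstart=6144, size=65536, type=83\n' | sudo sfdisk /dev/sda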

Finally, we can copy our root file system into our second 32MB partition:

$ sudo dd if=filesystem.img of=/dev/sda2
65535+0 records in
65535+0 records out
33553920 bytes (34 MB, 32 MiB) copied, 14.1064 s, 2.4 MB/s
$

You can now eject the SD card from the USB card reader, because it’s time to wire up our breadboard.

Wiring things up on the breadboard

If you’re developing on a Raspberry Pi, and you haven’t previously used UART serial — which is different from the “normal” USB serial — you should go read Section 4.5 of our Getting Started guide.

FUZIX wiring diagram
Connecting a Raspberry Pi to a Pico and SD card.

Here I’m using Adafruit’s MicroSD Card Breakout, and wiring the UART serial connection directly to the Raspberry Pi’s serial port using the GPIO headers.

However, if you’re developing on a laptop you can use something like the SparkFun FTDI Basic Breakout to connect the serial UART to your computer. Again, see our Getting Started guide for details: Section 9.1.4 if you’re on Apple macOS, or Section 9.2.5 if you’re on MS Windows.

Connecting a laptop to a Pico and SD card.

Either way, the mapping between the pins on your Raspberry Pi Pico and the SD card breakout is the same, and should be as follows:

Pico        RP2040            SD card
3V3 (OUT)   –                 +3.3V
Pin 16      GP12 (SPI1 RX)    DO (MISO)
Pin 17      GP13 (SPI1 CSn)   CS
Pin 18      GND               GND
Pin 19      GP14 (SPI1 SCK)   SCK
Pin 20      GP15 (SPI1 TX)    DI (MOSI)
Mapping between physical pin number, RP2040 pin, and SD card breakout.

Once you’ve wired things up, pop your formatted microSD card into the breadboarded SD card breakout, and plug your Raspberry Pi Pico into USB power. FUZIX will boot automatically.

Connecting to FUZIX

If you’re connecting using a Raspberry Pi, the first thing you’ll need to do is enable UART serial communications using raspi-config.

$ sudo raspi-config

Go to Interfacing Options → Serial. Select “No” when asked “Would you like a login shell to be accessible over serial?” and “Yes” when asked “Would you like the serial port hardware to be enabled?”

Enabling a serial UART using raspi-config on the Raspberry Pi.
Enabling a serial UART using raspi-config on Raspberry Pi.

When you leave raspi-config, choose “Yes” when prompted to reboot your Raspberry Pi so the serial port is enabled. More information about connecting via UART can be found in Section 4.5 of our Getting Started guide.

You can then connect to FUZIX using minicom:

$ sudo apt install minicom
$ minicom -b 115200 -o -D /dev/serial0

Alternatively, if you are working on a laptop running macOS or MS Windows, you can use minicom, screen, or your usual terminal program. If you’re unsure what to use, CoolTerm is a good cross-platform option that works on Linux, macOS, and Windows.
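
For example, to connect with screen from a macOS terminal you’d run something like the following. The device name here is purely illustrative (run ls /dev/tty.* to find yours); the baud rate matches the minicom example above.

$ screen /dev/tty.usbserial-0001 115200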

After connecting to the serial port you should see something like this:

FUZIX in Serial
Connected to FUZIX using Serial on Apple macOS.

If you don’t see anything, just unplug and replug your Pico to reset it and start FUZIX running again.

Finally, go ahead and enter the correct date and time, and when you get to the login prompt you can log in as “root” with no password.

Welcome to FUZIX!

Wrapping up

While there are still a few problems, the port of FUZIX to Pico has been merged into the upstream repository, which means it’s now an official part of the operating system.

Support for developing for Pico can be found on the Raspberry Pi forums. There is also an (unofficial) Discord server where a lot of people active in the new community seem to be hanging out. Feedback on the documentation should be posted as an Issue to the pico-feedback repository on GitHub, or directly to the relevant repository it concerns.

All of the documentation, along with lots of other help and links, can be found on the Getting Started page. If you lose track of where that is in the future, you can always find it from your Pico: to access the page, just press and hold the BOOTSEL button on your Pico, plug it into your laptop or Raspberry Pi, then release the button. Go ahead and open the RPI-RP2 volume, and then click on the INDEX.HTM file.

That will always take you to the Getting Started page.

The post How to get started with FUZIX on Raspberry Pi Pico appeared first on Raspberry Pi.

Closing the digital divide with Raspberry Pi computers

Par Philip Colligan

One of the harsh lessons we learned last year was that far too many young people still don’t have a computer for learning at home. There has always been a digital divide; the pandemic has just put it centre-stage. The good news is that the cost of solving this problem is now trivial compared to the cost of allowing it to persist.

A young person receives a Raspberry Pi kit to learn at home

Removing price as a barrier to anyone owning a computer was part of the founding mission of Raspberry Pi, which is why we work so hard to make sure that Raspberry Pi computers are as low-cost as possible for everyone, all of the time. We saw an incredible rise in the numbers of people — particularly young people — using Raspberry Pi computers as their main desktop PC during the lockdown, helped by the timely arrival of the fabulous Raspberry Pi 400.

Supporting the most vulnerable young people

As part of our response to the pandemic, the Raspberry Pi Foundation teamed up with UK Youth and a network of grassroots youth and community organisations to get Raspberry Pi desktop kits (with monitors, webcams, and headphones) into the hands of disadvantaged young people across the UK. These were young people who didn’t qualify for the government laptop scheme and who otherwise didn’t have a computer to learn at home.

A young person receives a Raspberry Pi kit to learn at home

This wasn’t just about shipping hardware (that’s the easy bit). We trained youth workers and teachers, and we worked closely with families to make sure that they could set up and use the computers. We did a huge amount of work to make sure that the educational platforms and apps they needed worked out of the box, and we provided a customised operating system image with free educational resources and enhanced parental controls.

A screenshot of a video call gallery with 23 participants
One of our training calls for the adults who will be supporting young people and families to use the Raspberry Pi kits

The impact has been immediate: young people engaging with learning; parents who reported positive changes in their children’s attitude and behaviour; youth and social workers who have deepened their relationship with families, enabling them to provide better support.

You can read more about the impact we’re having in the evaluation report for the first phases of the programme, which we published last week.

Thank you to our supporters

After a successful pilot programme generously funded by the Bloomfield Trust, we launched the Learn at Home fundraising campaign in December, inviting businesses and individuals to donate money to enable us to expand the programme. I am absolutely thrilled that more than 70 organisations and individuals have so far donated an incredible £900,000, and we are on track to deliver our 5000th Raspberry Pi kit in March.

Two young girls unpack a computer display
Thanks to Gillas Lane Primary Academy for collecting some wonderful photos and quotes illustrating the impact our computers are having!

While the pandemic shone a bright spotlight onto the digital divide, this isn’t just a problem while we are in lockdown. We’ve known for a long time that having a computer to learn at home can be transformational for any young person.

If you would like to get involved in helping us make sure that every young person has access to a computer to learn at home, we’d love to hear from you. Find out more details on our website, or email us at partners@raspberrypi.org.

The post Closing the digital divide with Raspberry Pi computers appeared first on Raspberry Pi.

Raspberry Pi makes LEGO minifigures play their own music

Par Ashley Whittaker

We shared Dennis Mellican’s overly effective anti-burglary project last month. Now he’s back with something a whole lot more musical and mini.

Inspiration

Dennis was inspired by other jukebox projects that use Raspberry Pi, NFC readers, and tags to make music play, particularly this one by Mark Hank, which we shared on the blog last year. The video below shows Dennis’s first attempt at creating an NFC Raspberry Pi music player, similar to Mark’s.

LEGO twist

After some poking around, Dennis realised that the LEGO Dimensions toy pad is a three-in-one NFC reader with its own light show. He hooked it up to a Raspberry Pi and developed a Python application to play music when LEGO Dimensions minifigures are placed on the toy pad. So, if an Elvis minifigure is placed on the reader, you’ll hear Elvis’s music.

LEGO figures dressed as members of the band KISS
Mini KISS rocking out on the NFC reader

The Raspberry Pi is hooked up to the LEGO Dimensions toy pad, with Musicfig (Dennis’s name for his creation) playing tracks via Spotify over Bluetooth. The small screen behind the minifigures is displaying the Musicfig web application which, like the Spotify app, displays the album art for the track that’s currently playing. 

No Spotify or LEGO? No problem!

Daft Punk LEGO minifigures stood on an NFC reader next to a Raspberry Pi and a phone showing Daft Punk playing on Spotify
Daft Punk LOVES Raspberry Pi

Spotify playback is optional, as you can use your own MP3 music file collection instead. You also don’t have to use LEGO Minifigures: most NFC-enabled devices or tags can be used, including Disney Infinity, Nintendo Amiibo, and Skylander toy characters.

Mini figurines in the shape of various kids’ film characters
Why not have Elsa sing… what’s that song again? Let it… what was it?

Dennis thought Musicfig could be a great marketable LEGO product for kids and grown-ups alike, and he submitted it to the LEGO Ideas website. Unfortunately, he had tinkered a little too much (we approve) and it wasn’t accepted, due to rules that don’t allow non-LEGO parts or customisations.

Want to build one?

The LEGO Dimensions toy pad was discontinued in 2017, but Dennis has seen some sets on sale at a few department stores, and even more cheaply on second-hand market sites like Bricklink. We’ve spotted them on eBay and Amazon too. Dennis also advises that the toy pad often sells for less than a dedicated NFC reader.

A Tron mini figure on the reader with the Tron movie soundtrack seen playing on the screen behind it
What’s the best movie soundtrack and why is it Tron?

Watch Dennis’s seven-year-old son Benny show you how it all works, from Elvis through to Prodigy via Daft Punk and Queen.

You can tell which songs Benny likes best because the volume goes to 11

There are some really simple step-by-step instructions for a quick install here, as well as a larger gallery of Musicfig rigs. And Dennis hosts a more detailed walkthrough of the project, plus code examples, here.

You can find all things Dennis-related, including previous Raspberry Pi projects, here.

The post Raspberry Pi makes LEGO minifigures play their own music appeared first on Raspberry Pi.

NeoPixel fireflies jar with Raspberry Pi | HackSpace 40

Par Andrew Gregory

This twinkly tutorial is fresh from the latest issue of HackSpace magazine, out now.

Adding flashing lights to a project is a great way to make it a little more visually appealing, and WS2812B LEDs (sometimes known as NeoPixels) are a great way to do that.

They have their own mini communications protocol, so you can control lots of them with just a single pin on your microcontroller, and there’s a handy library for Pico MicroPython that lets you control them.

First, you need to grab the library from hsmag.cc/PicoPython and copy the PY file to your Pico device. You can do this by opening the file in Thonny and clicking Save As, and then selecting your MicroPython device and calling it ws2812b.py.

You create an object with the following parameters: number of LEDs, state machine ID, and GPIO number, in that order. So, to create a strip of ten LEDs on state machine 0 and GPIO 0, you use:

pixels = ws2812b.ws2812b(10,0,0)

This object has two methods: show(), which sends the data to the strip, and set_pixel, which sets the colour values for a particular LED. The parameters are LED number, red, green, blue, with the colours taking values between 0 and 255. (There’s also a fill method, used below, which sets every LED to the same colour.)

At the time of writing, there’s an issue using this library in the interpreter. The author is investigating, but it’s best to run it from saved files to ensure everything runs properly. Create a file with the following and run it:

import ws2812b
import time

pixels = ws2812b.ws2812b(10,0,0)
pixels.set_pixel(5,10,0,0)
pixels.show()
time.sleep(2)
pixels.set_pixel(5,0,10,0)
pixels.show()
time.sleep(2)
pixels.fill(0,0,10)
pixels.show()
time.sleep(2)

Now that we can light up some LEDs, let’s take a look at how to turn this into an interesting light fixture.

We originally created the fireflies example in the WS2812B project for Christmas tree lights, but once the festive season was over, we liked them so much that we wanted to keep them going year round. Obviously, we can’t just keep a tree up all the time, so we needed another way to display them. We’re using them on thin-wire WS2812B LEDs that are available from direct-from-China sellers, but they should work on other types of WS2812B-compatible LEDs.

There are some other methods in the WS2812B module, such as set_pixel_line_gradient(), to add effects to your projects.
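
For instance, a gradient fading from red to blue across the first ten LEDs might look something like this. Treat it as a sketch: it assumes the method takes a start pixel, an end pixel, and RGB values for each end of the gradient, so check the library’s examples for the exact parameter order.

# Assumed signature: set_pixel_line_gradient(start, end, r1, g1, b1, r2, g2, b2)
pixels.set_pixel_line_gradient(0, 9, 255, 0, 0, 0, 0, 255)
pixels.show()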

For display, we’ve put the string of LEDs into a glass demijohn that we used to use for brewing, but any large glass jar would work. This gives an effect inspired by fireflies trapped in a jar. You can just download the code and run it (it’s in the examples folder in the above repository), but let’s take a look and see how it works. The first part of the code sets everything up:

import time
import ws2812b
import random

bright_div = 20
numpix = 50 # Number of NeoPixels
strip = ws2812b.ws2812b(numpix, 0,0)

colors = [
    [232, 100, 255], # Purple
    [200, 200, 20],  # Yellow
    [30, 200, 200],  # Blue
    [150, 50, 10],   # Orange
    [50, 200, 10],   # Green
]

max_len = 20
min_len = 5

flashing = []

num_flashes = 10

You can change numpix, and the details for creating the WS2812B object, to whatever’s suitable for your setup. The colors array holds the different colours that you want your LEDs to flash (in red, green, blue format). You can add to these or change them. We like the subtle pastels of this palette, but you can make it bolder by having more pure colours.

The max_len and min_len variables control the length of time each light flashes for. They’re not in any units (other than iterations of the main loop), so you may need a little trial and error to get settings that are pleasing for you. The remaining code is what actually does the work of flashing each LED:

for i in range(num_flashes):
    pix = random.randint(0, numpix - 1)
    col = random.randint(1, len(colors) - 1)
    flash_len = random.randint(min_len, max_len)
    flashing.append([pix, colors[col], flash_len, 0, 1])

strip.fill(0,0,0)

while True:
    strip.show()
    for i in range(num_flashes):
        pix = flashing[i][0]
        brightness = (flashing[i][3]/flashing[i][2])
        colr = (int(flashing[i][1][0]*brightness),
                int(flashing[i][1][1]*brightness),
                int(flashing[i][1][2]*brightness))
        strip.set_pixel(pix, colr[0], colr[1], colr[2])

        if flashing[i][2] == flashing[i][3]:
            flashing[i][4] = -1
        if flashing[i][3] == 0 and flashing[i][4] == -1:
            pix = random.randint(0, numpix - 1)
            col = random.randint(0, len(colors) - 1)
            flash_len = random.randint(min_len, max_len)
            flashing[i] = [pix, colors[col], flash_len, 0, 1]
        flashing[i][3] = flashing[i][3] + flashing[i][4]
        time.sleep(0.005)

The flashing list contains an entry for every LED that’s currently flashing. It stores the LED position, colour, length of the flash, current position in the flash, and whether it’s getting brighter or dimmer. These are initially seeded with random data; then we start a loop that keeps updating the display.

That’s all there is to it. You can tweak this code or create your very own custom display.

Issue 40 of HackSpace magazine is out NOW

Front cover of HackSpace magazine featuring Pico on a pink and black background

Each month, HackSpace magazine brings you the best projects, tips, tricks and tutorials from the makersphere. You can get it from the Raspberry Pi Press online store, The Raspberry Pi store in Cambridge, or your local newsagents.

Each issue is free to download from the HackSpace magazine website.

The post NeoPixel fireflies jar with Raspberry Pi | HackSpace 40 appeared first on Raspberry Pi.

Coding on Raspberry Pi remotely with Visual Studio Code

Par Ashley Whittaker

Jim Bennett from Microsoft, who showed you all how to get Visual Studio Code up and running on Raspberry Pi last week, is back to explain how to use VS Code for remote development on a headless Raspberry Pi.

Like a lot of Raspberry Pi users, I like to run my Raspberry Pi as a ‘headless’ device to control various electronics – such as a busy light to let my family know I’m in meetings, or my IoT powered ugly sweater.

The upside of headless is that my Raspberry Pi can be anywhere, not tied to a monitor, keyboard and mouse. The downside is programming and debugging it – do you plug your Raspberry Pi into a monitor and run the full Raspberry Pi OS desktop, or do you use Raspberry Pi OS Lite and try to program and debug over SSH using the command line? Or is there a better way?

Remote development with VS Code to the rescue

There is a better way – using Visual Studio Code remote development! Visual Studio Code, or VS Code, is a free, open source developer text editor with a whole swathe of extensions that support coding in multiple languages and provide tools to aid your development. I practically live day to day in VS Code: whether I’m writing blog posts, documentation or Python code, or programming microcontrollers, it’s my work ‘home’. You can run VS Code on Windows, macOS, and of course on a Raspberry Pi.

One of the extensions that helps here is the Remote SSH extension, part of a pack of remote development extensions. This extension allows you to connect to a remote device over SSH, and run VS Code as if you were running on that remote device. You see the remote file system, the VS Code terminal runs on the remote device, and you access the remote device’s hardware. When you are debugging, the debug session runs on the remote device, but VS Code runs on the host machine.

Photograph of Raspberry Pi 4
Raspberry Pi 4

For example – I can run VS Code on my MacBook Pro, and connect remotely to a Raspberry Pi 4 that is running headless. I can access the Raspberry Pi file system, run commands on a terminal connected to it, access whatever hardware my Raspberry Pi has, and debug on it.

Remote SSH needs a Raspberry Pi 3 or 4. It is not supported on older Raspberry Pis, or on Raspberry Pi Zero.

Set up remote development on Raspberry Pi

For remote development, your Raspberry Pi needs to be connected to your network either by ethernet or WiFi, and have SSH enabled. The Raspberry Pi documentation has a great article on setting up a headless Raspberry Pi if you don’t already know how to do this.

You also need to know either the IP address of the Raspberry Pi, or its hostname. If you don’t know how to do this, it is also covered in the Raspberry Pi documentation.
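
If you already have a terminal open on the Raspberry Pi itself, the quickest check is to run hostname, which prints the hostname, and hostname -I, which prints the IP address(es):

$ hostname
$ hostname -I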

Connect to the Raspberry Pi from VS Code

Once the Raspberry Pi is set up, you can connect from VS Code on your Mac or PC.

First make sure you have VS Code installed. If not, you can install it from the VS Code downloads page.

From inside VS Code, you will need to install the Remote SSH extension. Select the Extensions tab from the sidebar menu, then search for Remote development. Select the Remote Development extension, and select the Install button.

Next you can connect to your Raspberry Pi. Launch the VS Code command palette using Ctrl+Shift+P on Linux or Windows, or Cmd+Shift+P on macOS. Search for and select Remote SSH: Connect current window to host (there’s also a connect to host option that will create a new window).

Enter the SSH connection details, using user@host. For the user, enter the Raspberry Pi username (the default is pi). For the host, enter the IP address of the Raspberry Pi, or the hostname. The hostname needs to end with .local, so if you are using the default hostname of raspberrypi, enter raspberrypi.local.

The .local syntax is supported on macOS and the latest versions of Windows or Linux. If it doesn’t work for you then you can install additional software locally to add support. On Linux, install Avahi using the command sudo apt-get install avahi-daemon. On Windows, install either Bonjour Print Services for Windows, or iTunes for Windows.

For example, to connect to my Raspberry Pi 400 with a hostname of pi-400 using the default pi user, I enter pi@pi-400.local.
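
If you connect to the same Raspberry Pi regularly, you can save these details in your ~/.ssh/config file, and the Remote SSH extension will offer the host by name. Here’s a minimal sketch reusing the hostname and user from the example above:

Host pi-400
    HostName pi-400.local
    User pi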

The first time you connect, it will validate the fingerprint to ensure you are connecting to the correct host. Select Continue from this dialog.

Enter your Raspberry Pi’s password when prompted. The default is raspberry, but you should have changed this (really, you should!).

VS Code will then install the relevant tools on the Raspberry Pi and configure the remote SSH connection.

Code!

You will now be all set up and ready to code on your Raspberry Pi. Start by opening a folder or cloning a git repository and away you go coding, debugging and deploying your applications.

In the remote session, not all extensions you have installed locally will be available remotely. Any extensions that change the behavior of VS Code as an application, such as themes or tools for managing cloud resources, will be available.

Things like language packs and other programming tools are not installed in the remote session, so you’ll need to re-install them. When you install these extensions, you’ll see the Install button has changed to “Install in SSH: <hostname>” to show it’s being installed remotely.

VS Code may seem daunting at first – it’s a powerful tool with a huge range of extensions. The good news is Microsoft has you covered with lots of hands-on, self-guided learning guides on how to use it with different languages and development tools, from using Git version control, to developing web applications. There’s even a guide to learning Python basics with Wonder Woman!

Jim with his arms folded wearing a dark t shirt
Jim Bennett

You remember Jim – his blog Expecting Someone Geekier is well good. You can find him on Twitter @jimbobbennett and on GitHub.

The post Coding on Raspberry Pi remotely with Visual Studio Code appeared first on Raspberry Pi.

Raspberry Pi 1 Model B units brought back to life for charity

Par Ashley Whittaker

When we heard that James Dawson had rescued a load of well-worn Raspberry Pi 1 Model B and Model A computers from eBay, refurbished them, and sold them on, we felt warm and fuzzy knowing that some of our oldest devices would be finding new homes.

Pis needing new SD card slots
Raspberry Pis in need of new SD card slots. These are now all repaired but yet to be sold 

But the feels really hit when we learned that James is donating the money from those resales to us for our Learn at Home campaign, where we get Raspberry Pis into the hands of UK young people who need them the most.

We decided to learn a little more about the guy behind this generous idea.

Where do your computer repair skills come from?

I’m a 25-year-old guy from Newcastle Upon Tyne. I’ve always been into computers and started weekend work experience in a computer repair shop, which turned into an apprenticeship and then a full-time job, giving me a basic knowledge of board-level repairs and hardware diagnostics.

Why Raspberry Pi?

Around the time the first Raspberry Pi (the Model B) came out in 2012, the company I worked for took on a large client in their business IT support division that ran Linux-based servers. I immediately purchased a Raspberry Pi, set about learning my way around the Linux terminal, and picked it up pretty quickly.

Raspberry 1 Model A plus
Looking good for your age there, Model A

What do you do now?

I ended up supporting the aforementioned Linux-based servers for several years before moving on. Seven years later I’m a Senior Linux System Administrator / Platform engineer for a multinational company, and I’m not sure I’d be in this position if it wasn’t for Raspberry Pi! 

How did the idea to refurbish old Raspberry Pi units come about?

This isn’t something I had planned to do, it just happened! I was looking for some Raspberry Pi accessories on eBay one night, when I came across a box of 200+ broken Raspberry Pis. I had to have them and save them from becoming e-waste, but I didn’t have a plan for them, or even know if they were in a fixable state. 

James' haul of Raspberry Pi kit, some boxed, in a big cardboard box
James’ eBay haul

How did you fix them?

Once I found out the condition and performed some diagnostics, I realised that well over half of them were repairable. Using a cheap 3.5″ TFT Raspberry Pi display and a hacky bash script, I created a diagnostic tool that tested the USB ports, Ethernet port, and display output.

applying a solder mask to a pcb
Solder masks are a core component in James’s beauty routine. This photo comes from Part 3 of his five-part project blog.

The technical side of the repairs is detailed in a five-part (so far) blog. Get started on Part 1.

What made you want to donate to the Raspberry Pi Foundation?

I initially decided to see if I could donate the refurbished units to schools or maker spaces, but it turns out donating seven-year-old hardware is harder than it sounds!

Raspberry Pi 1s in a cardboard box

Thankfully, there are still a lot of people out there who are interested in early Raspberry Pi models, so I decided to sell them and donate the money. The Raspberry Pi Foundation, specifically their Learn at Home campaign, stood out to me. 

How well did they sell?

The first batch I repaired sold out in two days. That raised £400, which has already been donated. I hope to raise around £800 in total, and the next batch will be listed for sale soon.

Lots of Pi 1 Model B + packed in tightly in a box
Refurbished Raspberry Pi Model B, ready to ship

Keep up with James’s tech projects on his blog, or follow him on Twitter.

His latest refurbished batch of Raspberry Pi Model A computers is for sale on eBay, as well as a ton of Raspberry Pi Model B units.

Donate to our Learn at Home campaign

Since last summer, we’ve been distributing free Raspberry Pi computers to young people in the UK who don’t have access to a computer at home to do their schoolwork. The £800 that James is raising will allow us to give four disadvantaged young people free Raspberry Pi computer kits and ongoing support so they can continue learning while at home during the pandemic.

Find out how you can donate to our Learn at Home campaign to help solve this urgent issue.

The post Raspberry Pi 1 Model B units brought back to life for charity appeared first on Raspberry Pi.

Check beer stock with Keg Punk on Raspberry Pi

Par Ashley Whittaker

Do you remember the Danger Shed, the New Orleans-based, Raspberry Pi-powered home brewing monitoring setup in a… shed? Well, Patrick Murphy and his brewing crew are back with a new toy.

How does it work?

What is it?

It’s called Keg Punk – inventory software written in Python, specifically for running on Raspberry Pi and the 7″ Raspberry Pi Touch Display. You mount the touchscreen station in a convenient place and run the program on an embedded Raspberry Pi 4.

keg punk interface
Nice clean interface

Keg Punk is written in Python and is about 2500 lines of code. Since the program is small with a simple interface, it runs on anything from Raspberry Pi Zero to Raspberry Pi 4.

Who needs it?

As a manager at a local craft brewery, Patrick hated not knowing (or not being able to remember) how many kegs of each beer were left in the cellar.

So he started developing a cellar inventory program with the intention of being able to run it within arm’s reach of the beer taps.

Raspberry Pi seven inch touch screen running Keg Punk software
Small enough to sit discreetly next to the beer taps behind the bar

The station needed to have a touchscreen and be tough enough to cope with harsh environments (beer gets EVERYWHERE). Raspberry Pi is the perfect platform for the job as it’s small and easy to connect a touchscreen to.

It can be mounted discreetly close to workstations, so bartenders can quickly see how much stock is left without needing to go down to the cellar.

Keg Punk Raspberry Pi inside the screen casing
Everything fits neatly behind the Raspberry Pi Touch Display

While requirements in a professional setting inspired the idea of Keg Punk, it was developed with the home brewer in mind. The touchscreen station can easily be mounted to a kegerator (a portmanteau of keg and refrigerator) and the tap display can be configured to your setup.

Three installation options

One of the things the Danger Shed team admire most about Raspberry Pi users is their willingness to do a little hands-on tinkering. With that in mind, they launched Keg Punk in three packages, so you can choose an option based on how much of that you’d like to do:

keg punk full kit ready to be shipped
The Taproom Package

The Taproom Package: This is a full plug-in-and-go setup for those who don’t have a Raspberry Pi or who simply do not have time to tinker while also running a bar.

a raspberry pi 4 next to an s d card being held

Keg Punk pre-loaded SD card: Perfect for beer slingers who already have a Raspberry Pi but don’t want to install on their current SD card or deal with the hassle of installation.

Keg Punk software only: If you already have a Raspberry Pi and don’t mind a fair bit of tinkering, you can download the Keg Punk software and install it manually.

The post Check beer stock with Keg Punk on Raspberry Pi appeared first on Raspberry Pi.

Visual Studio Code comes to Raspberry Pi

Par Ashley Whittaker

Microsoft’s Visual Studio Code is an excellent C development environment, and now it’s an easy install on Raspberry Pi. Here’s Jim Bennett from Microsoft to show you all how to get VS Code up and running on our tiny computer. Take it away, Jim…

There are a few products in the tech sphere that get me really excited. One of them is Raspberry Pi (obviously), and the other is Visual Studio Code or VS Code. I always hoped that the two would come together one day — and now, to my great pleasure, they have!

VS Code is a free, open source developer text editor originally released for Windows, macOS and x64 Linux. Out of the box it supports generic text editing and git source code control, as well as full web development with JavaScript, TypeScript and Node.js, with debugging, intellisense and all the goodness you’d expect from a full-featured IDE. What makes it super powerful is extensions — bringing a huge range of programming languages, developer tools and other capabilities.

For example, my VS Code setup includes a Python extension so I can code and debug in Python, a set of Microsoft Azure extensions so I can manage my cloud services, PlatformIO to allow me to program microcontrollers like Arduino boards, coupled with a C++ extension to support coding in C and C++, and even some Docker support. Not a bad setup for a completely free developer tool.

Jim’s Raspberry Pi 400 running VS Code

I’ve been hoping for years VS Code would come to Raspberry Pi, and finally it’s here. As well as supporting Debian Linux on x64, there are now builds for ARM and ARM64 – both of which can run on Raspberry Pi OS (the ARM build on Raspberry Pi OS, the ARM64 on the beta of the 64-bit Raspberry Pi OS). And yes — I am writing this right now on a Raspberry Pi 400 running VS Code!

Why am I so excited about this?

Well, there are a couple of reasons.

Firstly, it brings an exceptional developer tool to Raspberry Pi. There are already some great editors, but nothing of the calibre of VS Code. I can take my $35 computer, plug in a keyboard and mouse, connect a monitor or a TV, and code in a wide range of languages from the same place.

I see kids learning Python at school using one tool, then learning web development in an after-school coding club with a different tool. They can now do both in the same application, reducing the cognitive load – they only have to learn one tool, one debugger, one setup. Combine this with the new Raspberry Pi 400 and you have an all-in-one solution to learning to code, reminiscent of my ZX Spectrum of decades ago, but so much more powerful.

The second reason is to me the most important — it allows kids to share the same development environment as their grown-ups. Imagine the joy of a 10-year-old coding Python using VS Code on their Raspberry Pi plugged into the family TV, then seeing their Mum working from home coding Python in exactly the same tool on her work laptop as part of her job as an AI engineer or data scientist. It also makes it easier when Mum has to inevitably help with unblocking the issues that always come up with learners.

As a young child it was mind-blowing when my Dad brought home a work PC so he could write reports and I could use it to write up my school work – I was using what Dad used at work, making me feel important. I see this with my seven-year-old daughter, seeing her excitement that I use Microsoft Teams for work, the same as she uses for her virtual schooling (she’s even offered to teach me how to use it if I get stuck). To be able to bring that unadulterated joy of using ‘grown-up tools’ to our young learners is priceless.

Installing VS Code

The great news is VS Code is now available as part of the Raspberry Pi OS apt packages. Launch the Raspberry Pi Terminal and run the following commands:

sudo apt update 
sudo apt install code -y

This will download and install VS Code. If you’ve got your hands on a Pico, then you may not even need to do this – VS Code is installed as part of the Pico setup from the Getting Started guide.

After installing VS Code, you can run it from the Programming folder in the Raspberry Pi menu.

Getting started with VS Code

VS Code may seem daunting at first – it’s a powerful tool with a huge range of extensions. The good news is Microsoft has you covered with lots of hands-on, self-guided learning guides on how to use it with different languages and development tools, from using Git version control, to developing web applications — there’s even a guide to learning Python basics with Wonder Woman.

Go grab it and happy coding!

Jim with his arms folded wearing a dark t shirt
There he is – that’s the real life Jim!

Brilliant Jim Bennett shares loads of Raspberry Pi builds and tutorials over on Expecting Someone Geekier and tweets @jimbobbennett. He also works in Developer Relations at Microsoft. You can learn pretty much everything there is to know about him on GitHub.

The post Visual Studio Code comes to Raspberry Pi appeared first on Raspberry Pi.

Machine learning and depth estimation using Raspberry Pi

Par David Plowman

One of our engineers, David Plowman, describes machine learning and shares news of a Raspberry Pi depth estimation challenge run by ETH Zürich (Swiss Federal Institute of Technology).

Spoiler alert – it’s all happening virtually, so you can definitely make the trip and attend, or maybe even enter yourself.

What is Machine Learning?

Machine Learning (ML) and Artificial Intelligence (AI) are some of the top engineering-related buzzwords of the moment, and foremost among current ML paradigms is probably the Artificial Neural Network (ANN).

ANNs involve millions of tiny calculations, merged together in a giant biologically inspired network – hence the name. These networks typically have millions of parameters that control each calculation, and they must be optimised for every different task at hand.

This process of optimising the parameters so that a given set of inputs correctly produces a known set of outputs is known as training, and is what gives rise to the sense that the network is “learning”.

A popular type of ANN used for processing images is the Convolutional Neural Network. Many small calculations are performed on groups of input pixels to produce each output pixel

Machine Learning frameworks

A number of well-known companies produce free ML frameworks that you can download and use on your own computer. The network training procedure runs best on machines with powerful CPUs and GPUs, but even running a pre-trained network (a process known as inference) can be quite expensive.

One of the most popular frameworks is Google’s TensorFlow (TF), and since this is rather resource intensive, they also produce a cut-down version optimised for less powerful platforms. This is TensorFlow Lite (TFLite), which can be run effectively on Raspberry Pi.
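
To give a flavour of what inference looks like in practice, here’s a minimal Python sketch using the tflite_runtime package on a Raspberry Pi. The model file name is a placeholder and the dummy input only demonstrates the mechanics; a real depth estimation model defines its own input shape and would be fed a camera image instead.

import numpy as np
import tflite_runtime.interpreter as tflite

# Load a pre-trained model and allocate its input/output tensors
interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a correctly shaped dummy input tensor and run inference
shape = input_details[0]["shape"]
dummy = np.zeros(shape, dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

# For a monocular depth model, the output is the depth map
depth_map = interpreter.get_tensor(output_details[0]["index"])
print(depth_map.shape)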

Depth estimation

ANNs have proven very adept at a wide variety of image processing tasks, most notably object classification and detection, but also depth estimation. This is the process of taking one or more images and working out how far away every part of the scene is from the camera, producing a depth map.

Here’s an example:

Depth estimation example using a truck

The image on the right shows, by the brightness of each pixel, how far away the objects in the original (left-hand) image are from the camera (darker = nearer).

We distinguish between stereo depth estimation, which starts with a stereo pair of images (taken from marginally different viewpoints; here, parallax can be used to inform the algorithm), and monocular depth estimation, working from just a single image.

The applications of such techniques should be clear, ranging from robots that need to understand and navigate their environments, to the fake bokeh effects beloved of many modern smartphone cameras.

Depth Estimation Challenge

C V P R conference logo with dark blue background and the edge of the earth covered in scattered orange lights connected by white lines

We were very interested then to learn that, as part of the CVPR (Computer Vision and Pattern Recognition) 2021 conference, Andrey Ignatov and Radu Timofte of ETH Zürich were planning to run a Monocular Depth Estimation Challenge. They are specifically targeting the Raspberry Pi 4 platform running TFLite, and we are delighted to support this effort.

For more information, or indeed if any technically minded readers are interested in entering the challenge, please visit the challenge website.

The conference and workshops are all taking place virtually in June, and we’ll be sure to update our blog with some of the results and models produced for Raspberry Pi 4 by the competing teams. We wish them all good luck!

The post Machine learning and depth estimation using Raspberry Pi appeared first on Raspberry Pi.

Keeping secrets and writing about Raspberry silicon

Par Alasdair Allan

In the latest issue of The MagPi Magazine, Alasdair Allan shares the secrets he had to keep while working behind the scenes to get Raspberry Pi’s RP2040 chip out into the world.

Alasdair Allan holding a Pico board
BEST friends

There is a new thing in the world, and I had a ringside seat for its creation. 

For me, it started just over a year ago with a phone call from Eben Upton. One week later I was sitting in a meeting room at Raspberry Pi Towers in Cambridge, my head tilted to one side while Eben scribbled on a whiteboard and waved his hands around. 

Eben had just told me that Raspberry Pi was designing its own silicon, and he was talking about the chip that would eventually be known as RP2040. Eben started out by drawing the bus fabric, which isn’t where you normally start when you talk about a new chip, but it turned out RP2040 was a rather unusual chip.

“I gradually drifted sideways into playing with the toys.”

I get bored easily. I started my career doing research into the high-energy physics of collision shocks in the accretion discs surrounding white dwarf stars, but I gradually drifted sideways into playing with the toys.

After spending some time working with agent-based systems to solve scheduling problems for robotic telescopes, I became interested in machine learning and what later became known as ‘big data’.

From there, I spent time investigating the ‘data exhaust’ and data living outside the cloud in embedded and distributed devices, and as a consequence I did a lot of work on mobile systems. That led me to do some of the thinking, and work, on what’s now known as the Internet of Things, which meant I had recently spent a lot of time writing and talking about embedded hardware.

Eben was looking for someone to make sure the documentation around Raspberry Pi Pico, and RP2040 silicon itself, was going to measure up. I took the job.

Rumour mill

I had spent the previous six months benchmarking Machine Learning (ML) inferencing on embedded hardware, and a lot of time writing and talking about the trendy new world of Tiny ML.

The rumours of what I was going to be doing for Raspberry Pi started flying on social media almost immediately. The somewhat pervasive idea that I was there to help support putting a Coral Edge TPU onto Raspberry Pi 5 was a particularly good wheeze. 

Instead, I was going to spend the next year metaphorically locked in a room building a documentation toolchain around – and of course writing about – a totally secret product.

Screenshot of our Getting Started with Raspberry Pi Pico landing page
Alasdair’s work turned into this

I couldn’t talk about it in public, and I talk about things in public a lot. Only the fact that almost everyone else spent the next year locked indoors as well kept too many questions from being asked. I didn’t have to tell conference organisers that I couldn’t talk about what I was doing, because there weren’t any conferences to organise.

I’m rather pleased with what I’ve done with my first year at Raspberry Pi, and of course with how my work on RP2040 and Raspberry Pi Pico turned out.

Taken from our Getting Started with Raspberry Pi Pico page

Much like a Raspberry Pi is an accessible computer that gives you everything you need to learn to write a program, RP2040 is an accessible chip with everything you need to learn to build a product. It’s going to bring a big change to the microcontroller market, and I’m really rather pleased I got a ringside seat to its creation.

The post Keeping secrets and writing about Raspberry silicon appeared first on Raspberry Pi.

What does equity-focused teaching mean in computer science education?

Par Sue Sentance

Today, I discuss the second research seminar in our series of six free online research seminars focused on diversity and inclusion in computing education, in which, together with the Royal Academy of Engineering, we host researchers from the UK and USA. By diversity, we mean any dimension that can be used to differentiate groups and people from one another. This might be, for example, age, gender, socio-economic status, disability, ethnicity, religion, nationality, or sexuality. The aim of inclusion is to embrace all people irrespective of difference. 

In this seminar, we were delighted to hear from Prof Tia Madkins (University of Texas at Austin), Dr Nicol R. Howard (University of Redlands), and Shomari Jones (Bellevue School District) (find their bios here), who talked to us about culturally responsive pedagogy and equity-focused teaching in K-12 Computer Science.

  • Prof Tia Madkins
  • Dr Nicol R. Howard
  • Shomari Jones

Equity-focused computer science teaching

Tia began the seminar with an audience-engaging task: she asked all participants to share their own definition of equity in the seminar chat. Amongst their many suggestions were “giving everybody the same opportunity”, “equal opportunity to access high-quality education”, and “everyone has access to the same resources”. I found Shomari’s own definition of equity very powerful: 

“Equity is the fair treatment, access, opportunity, and advancement of all people, while at the same time striving to identify and eliminate barriers that have prevented the full participation of some groups. Improving equity involves increasing justice and fairness within the procedures and processes of institutions or systems, as well as the distribution of resources. Tackling equity requires an understanding of the root cause of outcome disparity within our society.”

Shomari Jones

This definition is drawn directly from the young people Shomari works with, and it goes beyond access and opportunity to the notion of increasing justice and fairness and addressing the causes of outcome disparity. Justice was a theme throughout the seminar, with all speakers referring to the way that their work looks at equity in computer science education through a justice-oriented lens.

Removing deficit thinking

Using a justice-oriented approach means that learners should be encouraged to use their computer science knowledge to make a difference in areas that are important to them. It means that just having access to a computer science education is not sufficient for equity.

Tia Madkins presents a slide: "A justice-oriented approach to computer science teaching empowers students to use CS knowledge for transformation, moves beyond access and achievement frames, and is an asset- or strengths-based approach centering students and families"

Tia spoke about the need to reject “deficit thinking” (i.e. focusing on what learners lack) and instead focus on learners’ strengths or assets and how they bring these to the school classroom. For researchers and teachers to do this, we need to be aware of our own mindset and perspective, to think about what we value about ethnic and racial identities, and to be willing to reflect and take feedback.

Activities to support computer science teaching

Nicol talked about some of the ways of designing computing lessons to be equity-focused. She highlighted the benefits of pair programming and other peer pedagogies, where students teach and learn from each other through feedback and sharing ideas/completed work. She suggested using a variety of different programs and environments, to ensure a range of different pathways to understanding. Teachers and schools can aim to base teaching around tools that are open and accessible and, where possible, available in many languages. If the software environment and tasks are accessible, they open the doors of opportunity to enable students to move on to more advanced materials. To demonstrate to learners that computer science is applicable across domains, the topic can also be introduced in the context of mathematics and other subjects.

Nicol Howard presents a slide: "Considerations for equity-focused computer science teaching include your beliefs (and your students' beliefs) and how they impact CS classrooms; tiered activities and pair programming; self-expressions versus CS preparation; equity-focused lens"

Learners can benefit from learning computer science regardless of whether they want to become a computer scientist. Computing offers them skills that they can use for self-expression or to be creative in other areas of their life. They can use their knowledge for a specific purpose and to become more autonomous, particularly if their teacher does not have any deficit thinking. In addition, culturally relevant teaching in the classroom demonstrates a teacher’s deliberate and explicit acknowledgment that they value all students in their classroom and expect students to excel.

Engaging family and community

Shomari talked about the importance of working with parents and families of ethnically diverse students in order to hear their voices and learn from their experiences.

Shomari Jones presents a slide: “Parents without backgrounds and insights into the changing landscape of technology struggle to negotiate what roles they can play, such as how to work together in computing activities or how to find learning opportunities for their children.”

He described how parents’ and carers’ lack of background in technology can drastically impact the experiences of young people.

“Parents without backgrounds and insights into the changing landscape of technology struggle to negotiate what roles they can play, such as how to work together in computing activities or how to find learning opportunities for their children.”

Betsy DiSalvo, Cecili Reid, and Parisa Khanipour Roshan. 2014

Shomari drew on an example from the Pacific Northwest in the US, a region with many successful technology companies. In this location, young people from wealthy white and Asian communities can engage fully in informal learning of computer science and can have aspirations to enter technology-related fields, whereas amongst the Black and Latino communities, there are significant barriers to any form of engagement with technology. This pre-existing inequity has been exacerbated by the coronavirus pandemic: when so much of education moved online, it became widely apparent that many families had never owned, or even used, a computer. Shomari highlighted the importance of working with pre-service teachers to support them in understanding the necessity of family and community engagement.

Building classroom communities

Building a classroom community starts by fostering and maintaining relationships with students, families, and their communities. Our speakers emphasised how important it is to understand the lives of learners and their situations. Through this understanding, learning experiences can be designed that connect with the learners’ lived experiences and cultural practices. In addition, by tapping into what matters most to learners, teachers can inspire them to be change agents in their communities. Tia gave the example of learning to code or learning to build an app, which provides learners with practical tools they can use for projects they care about, and with skills to create artefacts that challenge and document injustices they see happening in their communities.

Find out more

If you want to learn more about this topic, a great place to start is the recent paper Tia and Nicol have co-authored that lays out more detail on the work described in the seminar: Engaging Equity Pedagogies in Computer Science Learning Environments, by Tia C. Madkins, Nicol R. Howard and Natalie Freed, 2020.

You can access the presentation slides via our seminars page.

Join our next free seminar

In our next seminar on Tuesday 2 March at 17:00–18:30 GMT / 12:00–13:30 EST / 9:00–10:30 PST / 18:00–19:30 CET, we’ll welcome Jakita O. Thomas (Auburn University), who is going to talk to us about Designing STEM Learning Environments to Support Computational Algorithmic Thinking and Black Girls: A Possibility Model for Changing Hegemonic Narratives and Disrupting STEM Neoliberal Projects. To join this free online seminar, simply sign up on our seminars page.

If you’ve already signed up previously, there’s no need to do so again — we’ll make sure you receive access to the online seminar session.

The post What does equity-focused teaching mean in computer science education? appeared first on Raspberry Pi.

The journey to Raspberry Silicon

Par Liam Fraser

When I first joined Raspberry Pi as a software engineer four and a half years ago, I didn’t know anything about chip design. I thought it was magic. This blog post looks at the journey to Raspberry Silicon and the design process of RP2040.

RP2040 on a Raspberry Pi Pico
RP2040 – the heart of Raspberry Pi Pico

RP2040 has been in development since summer 2017. Chips are extremely complicated to design. In particular, the first chip you design requires you to design several fundamental components, which you can then reuse on future chips. The engineering effort was also diverted at some points in the project (for example to focus on the Raspberry Pi 4 launch).

Once the chip architecture is specified, the next stage of the project is the design and implementation, where hardware is described using a hardware description language such as Verilog. Verilog has been around since 1984 and, along with VHDL, has been used to design most chips in existence today. So what does Verilog look like, and how does it compare to writing software?

Suppose we have a C program that implements two wrapping counters:

#include <stdint.h>

void count_forever(void) {
    uint8_t i = 0;
    uint8_t j = 0;
    while (1) {
        i += 1;    // i is incremented first...
        j += 1;    // ...and only then j
    }
}

This C program will execute sequentially line by line, and the processor won’t be able to do anything else (unless it is interrupted) while running this code. Let’s compare this with a Verilog implementation of the same counter:

module counter (
    input wire clk,
    input wire rst_n,
    output reg [7:0] i,
    output reg [7:0] j
);

always @ (posedge clk or negedge rst_n) begin
    if (~rst_n) begin
        // Counter is in reset so hold counter at 0
        i <= 8'd0;
        j <= 8'd0;
    end else begin
        i <= i + 8'd1;
        j <= j + 8'd1;
    end
end

endmodule

Verilog statements are executed in parallel on every clock cycle, so both i and j are updated at exactly the same time, whereas the C program increments i first, followed by j. Expanding on this idea, you can think of a chip as thousands of small Verilog modules like this, all executing in parallel.

A chip designer has several tools available to them to test the design. Testing/verification is the most important part of a chip design project: if a feature hasn’t been tested, then it probably doesn’t work. Two methods of testing used on RP2040 are simulators and FPGAs. 

A simulator lets you simulate the entire chip design, and also some additional components. In RP2040’s case, we simulated RP2040 and an external flash chip, allowing us to run code from SPI flash in the simulator. That is the beauty of hardware design: you can design some hardware, then write some C code to test it, and then watch it all run cycle by cycle in the simulator.

“ell” from the phrase “Hello World” from core0 of RP2040 in a simulator

The downside to simulators is that they are very slow. It can take several hours to simulate just one second of a chip. Simulation time can be reduced by testing blocks of hardware in isolation from the rest of the chip, but even then it is still slow. This is where FPGAs come in…

FPGAs (Field Programmable Gate Arrays) are chips that have reconfigurable logic, and can emulate the digital parts of a chip, allowing most of the logic in the chip to be tested. 

FPGAs can’t emulate the analogue parts of a design, such as the resistors that are built into RP2040’s USB PHY. However, this can be approximated by using external hardware to provide analogue functionality. FPGAs often can’t run a design at full speed. In RP2040’s case, the FPGA was able to run at 48MHz (compared to 133MHz for the fully fledged chip). This is still fast enough to test everything we wanted and also develop software on.

FPGAs also have debug logic built into them. This allows the hardware designer to probe signals in the FPGA, and view them in a waveform viewer similar to the simulator above, although visibility is limited compared to the simulator.

Graham’s tidy FPGA
Graham’s less tidy FPGA
Oh dear

The RP2040 bootrom was developed on FPGA, allowing us to test the USB boot mode, as well as executing code from SPI flash. In the image above, the SD card slot on the FPGA is wired up to SPI flash using an SD card-shaped flash board designed by Luke Wren.

USB testing on FPGA

In parallel with Verilog development, the implementation team is busy making sure that the Verilog we write can actually be made into a real chip. Synthesis takes a Verilog description of the chip and converts the described logic into logic cells from your chosen cell library. RP2040 is manufactured by TSMC, and we used their standard cell library.

RP2040 silicon in a DIL package!

Chip manufacturing isn’t perfect. So design for test (DFT) logic is inserted, allowing the logic in RP2040 to be tested during production to make sure there are no manufacturing defects (short or open circuit connections, for example). Chips that fail this production test are thrown away (this is a tiny percentage – the yield for RP2040 is particularly high due to the small die size).

After synthesis, the resulting netlist goes through a layout phase where the standard cells are physically placed and interconnect wires are routed. This is a synchronous design so clock trees are inserted, and timing is checked and fixed to make sure the design meets the clock speeds that we want. Once several design rules are checked, the layout can be exported to GDSII format, suitable for export to TSMC for manufacture.

RP2040 chips ready for a bring up board

(In reality, the process of synthesis, layout, and DFT insertion is extremely complicated and takes several months to get right, so the description here is just a highly abbreviated overview of the entire process.)

Once silicon wafers are manufactured at TSMC they need to be put into a package. After that, the first chips are sent to Pi Towers for bring-up!

The RP2040 bring-up board

A bring-up board typically has a socket (in the centre) so you can test several chips in a single board. It also separates each power supply on the chip, so you can limit the current on first power-up to check there are no shorts. You don’t want the magic smoke to escape!

The USB boot mode working straight out of the box on a bring-up board!

Once the initial bring-up was done, RP2040 was put through its paces in the lab: characterising its behaviour, and seeing how it performs at temperature and voltage extremes.

Once the initial batch of RP2040s was signed off, we gave the signal for mass production, ready for the chips to be put onto the Pico boards you have in your hands today.

82K RP2040s ready for shipment to Sony

A chip is useless without detailed documentation. While RP2040 was making its way to mass production, we spent several months writing the SDK and the excellent documentation that’s available to you today.

The post The journey to Raspberry Silicon appeared first on Raspberry Pi.

Creative projects for young digital makers

Par Philip Colligan

With so many people all over the world still living in various levels of lockdown, we’ve been working hard to provide free, creative project resources for you to keep young digital makers occupied, learning, and most importantly having fun.

Two siblings sit on a sofa looking at a laptop

As a dad of two, I know how useful it is to have resources and project ideas for things that we can do together, or that the kids can crack on with independently. As we head into the weekend, I thought I’d share a few ideas for where to get started. 

Coding and digital making projects

We offer hundreds of self-guided projects for learning to create with code using tools like Scratch, Python, and more. The projects can be completed online on any computer, they are tailored for different levels of experience, and they include step-by-step guidance that quickly leads to confident, independent young digital makers.

animation of butterflies fluttering around a forest clearing
You can code a butterfly garden with one of our ‘Look after yourself’ projects!

We recently launched a new set of beginner Scratch projects on the theme of ‘Look after yourself’, which include activities designed to help young people take care of their own wellbeing while getting creative with code. They are brilliant.

“I am so excited by the [‘Look after yourself’] projects on offer. It couldn’t be more perfect for everything we are navigating right now.”

– teacher in Scotland

We offer lots of project ideas for the more advanced learners too, including a new set of Python machine learning projects.

With spring in the air here in Cambridge, UK, my kids and I are planning on building a new Raspberry Pi–powered nature camera this weekend. What will you make? 

Send a message to astronauts in space

If Earth is getting you down, then how about creating code that will be sent to the International Space Station?

This is where your kids’ code could run aboard the ISS!

As part of Astro Pi Mission Zero, young people up to age 14 can write a Python program to send their own personal message to the astronauts aboard the ISS. Mission Zero takes about an hour to complete online following a step-by-step guide. It’s a fantastic activity for anyone looking to write Python code for the first time!

Make a cool project 

We know that motivation matters. Young digital makers often need a goal to work towards, and that’s where Coolest Projects comes in. It’s the world-leading technology showcase where young digital makers show the world what they’ve created and inspire each other.

Coolest Projects is open to young people up to the age of 18, all over the world, with any level of experience or skills. Young people can register their project ideas now and then create their project so that they can share it with the world on our online gallery. 

It’s a brilliant way to motivate your young digital makers to come up with an idea and make it real. If you’re looking for inspiration, then check out the brilliant projects from last year.

Happy digital making!

I hope that these resources and project ideas inspire you and your kids to get creative with technology, whether you’re in lockdown or not. Stay safe and be kind to yourself and each other. We’ll get through this.

The post Creative projects for young digital makers appeared first on Raspberry Pi.

Code a Light Cycle arcade minigame | Wireframe #47

Par Ryan Lambie

Speed around an arena, avoiding walls and deadly trails in this Light Cycle minigame. Mark Vanstone has the code.

At the beginning of the 1980s, Disney made plans for an entirely new kind of animated movie that used cutting-edge computer graphics. The resulting film was 1982’s TRON, and it inevitably sparked one of the earliest tie-in arcade machines.

Battle against AI enemies in the original arcade classic.

The game featured several minigames, including one based on the Light Cycle section of the movie, where players speed around an arena on high-tech motorbikes, which leave a deadly trail of light in their wake. If competitors hit any walls or cross the path of any trails, then it’s game over.

Players progress through twelve levels, all named after programming languages. In the Light Cycle game, the player competes against AI opponents who drive yellow Light Cycles around the arena. As the levels progress, more AI players are added.

The TRON game, distributed by Bally Midway, was well-received in arcades, and even won Electronic Games Magazine’s (presumably) coveted Coin-operated Game of the Year gong.

Although the arcade game wasn’t ported to home computers at the time, several similar games – and outright clones – emerged, such as the unsubtly named Light Cycle for the BBC Micro, Oric, and ZX Spectrum.

The Light Cycle minigame is essentially a variation on Snake, with the player leaving a trail behind them as they move around the screen. There are various ways to code this with Pygame Zero.

Our homage to the TRON Light Cycle classic arcade game.

In this sample, we’ll focus on the movement of the player Light Cycle and creating the trails that are left behind as it moves around the screen. We could use line drawing functions for the trail behind the bike, or go for a system like Snake, where blocks are added to the trail as the player moves.

In this example, though, we’re going to use a two-dimensional list as a matrix of positions on the screen. This means that wherever the player moves on the screen, we can set the position as visited or check to see if it’s been visited before and, if so, trigger an end-game event.

For the main draw() function, we first blit our background image, which is the cross-hatched arena, then we iterate through our two-dimensional list of screen positions (each 10 pixels square), displaying a square anywhere the Cycle has been. The Cycle is then drawn, and we can add a display of the score.

The update() function contains code to move the Cycle and check for collisions. We use a list of directions in degrees to control the angle the player is pointing, and another list of x and y increments for each direction. Each update we add x and y coordinates to the Cycle actor to move it in the direction that it’s pointing multiplied by our speed variable.

We have an on_key_down() function defined to handle changing the direction of the Cycle actor with the arrow keys. We need to wait a while before checking for collisions on the current position, as the Cycle won’t have moved away for several updates, so each screen position in the matrix is actually a counter of how many updates it’s been there for.

We can then test to see if 15 updates have happened before testing the square for collisions, which gives our Cycle enough time to clear the area. If we do detect a collision, then we can start the game-end sequence.
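
Mark’s full listing is Pygame Zero, but the grid-of-counters idea is language-neutral. Here’s a rough sketch of the same bookkeeping in C, with made-up grid dimensions – treat it as annotated pseudocode for the Python version:

#include <stdbool.h>

#define GRID_W 80          // an 800-pixel-wide screen in 10-pixel squares
#define GRID_H 60
#define CLEAR_TIME 15      // updates before a square counts as a deadly trail

static int visited[GRID_H][GRID_W]; // 0 = never visited

// Called once per update with the Cycle's current grid square.
// Returns true if the Cycle has hit a wall or an established trail.
bool enter_square(int x, int y) {
    if (x < 0 || x >= GRID_W || y < 0 || y >= GRID_H)
        return true;                 // off the arena: crash
    if (visited[y][x] > CLEAR_TIME)
        return true;                 // an established trail: crash
    visited[y][x]++;                 // mark the square, or keep counting
    return false;
}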

We set the gamestate variable to 1, which then means the update() function uses that variable as a counter to run through the frames of animation for the Cycle’s explosion. Once it reaches the end of the sequence, the game stops.

We have a key press defined (the SPACE bar) in the on_key_down() function to call our init() function, which not only sets up variables when the game starts but also puts things back to their starting state.

Here’s Mark’s code for a TRON-style Light Cycle minigame. To get it working on your system, you’ll need to install Pygame Zero. And to download the full code and assets, head here.

So that’s the fundamentals of the player Light Cycle movement and collision checking. To make it more like the original arcade game, why not try experimenting with the code and adding a few computer-controlled rivals?

Get your copy of Wireframe issue 47

You can read more features like this one in Wireframe issue 47, available directly from Raspberry Pi Press — we deliver worldwide.

And if you’d like a handy digital version of the magazine, you can also download issue 47 for free in PDF format.

The post Code a Light Cycle arcade minigame | Wireframe #47 appeared first on Raspberry Pi.

Raspberry Pi Pico balloon tracker

Par Ashley Whittaker

Dave Akerman of High Altitude Ballooning came up with a stratospherically cool application for Raspberry Pi Pico. In this guest blog, he shows you how to build and code a weather balloon tracker.

Balloon tracking

My main hobby is flying weather balloons, using GPS/radio trackers to relay their position to the ground, so they can be tracked and hopefully recovered. Trackers minimally consist of a GPS receiver feeding the current position to a small computer, which in turn controls a radio transmitter to send that position to the ground. That position is then fed to a live map to aid chasing and recovering the flight.

How it all works

This essential role of the tracker computer is thus a simple one, and those making their own trackers can choose from a variety of microcontroller chips and boards, for example Arduino boards, PIC microcontrollers, or the BBC micro:bit. Anything with a modest amount of code memory, data memory, processor power and I/O (serial, SPI etc. depending on choice of GPS and radio) will do. A popular choice is Raspberry Pi, which, whilst a sledgehammer to crack a nut for tracking, does make it easy to add a camera.

Raspberry Pi Pico

When I see a new type of processor board, I feel duty bound to make it into a balloon tracker, so when I was asked to help test the new Raspberry Pi Pico, doing so was my first thought. It has plenty of I/O – SPI ports, I2C and serial all available – plus a unique ability (not that I need it for now) to add extra peripherals using the programmable PIO modules, so there was no doubt that it would be very usable. Also, having much more memory than typical microcontrollers, it offers the ability to add functions that would normally need a full Raspberry Pi board – for example on-board landing prediction. More on that later.

Tracker components

So a basic tracker has a GPS receiver and radio transmitter. To connect these to the Raspberry Pi Pico, I used a prototyping board where I mounted a UBlox GPS receiver, LoRa radio transmitter, and sockets for the Pico itself.

I don’t use breadboards as they are prone to intermittent connections that then waste programming time chasing a “bug” that’s actually a hardware problem. Besides, trackers need to be robust so I would need to solder one together eventually anyway.

Pico top, GPS bottom-left; LoRa bottom-right

The particular UBlox GPS module I had handy only has a serial port brought out, so I couldn’t use I2C. No matter, because, unlike most Arduino boards, the Raspberry Pi Pico isn’t limited to a single serial port.

The LoRa module connects via SPI and a single GPIO pin which the module uses to send its status (e.g. packet sent – ready to send next packet) to the Raspberry Pi Pico.

Finally, with the tracker working, I added an I2C environmental sensor to the board via a pin header, so the sensor can be placed in free air outside the tracker.

Development setup

I decided to use C for my tracker rather than Python, for a variety of reasons. The main one is that I have plenty of existing C tracker code to work from, for Arduino and Raspberry Pi, but not so much Python. Secondly, I figured that most of the testers would be using Python so there might be more of a need to test the C toolchain.

The easiest route to getting the C/C++ toolchain working is to install it on a Raspberry Pi 4. I couldn’t quite get the VS Code integration working (finger trouble, I think), but anyway I’m quite happy to code with an editor and a separate build window. So what I ended up with was Notepad++ on my Windows PC to edit the code, with the source on a Raspberry Pi 4. I then had an ssh window open to run the compile/link steps, and a separate one running the debugger. The debugger downloads the binary to the Raspberry Pi Pico via the latter’s debug port.

For regular debug output from the program I connected a Raspberry Pi Pico serial port to an FTDI USB Serial TTL adapter connected back to my PC – see the image below.

At some point I’ll revisit this setup. First, it’s now possible to printf to a virtual USB serial port, which frees up that Raspberry Pi Pico serial port. Secondly, I need to get that VS Code integration working.

Tracker code

My Raspberry Pi and Arduino tracker programs work slightly differently. On the Raspberry Pi, to separate the code for the different functions (GPS, radio, sensors etc) I use a separate thread for each. That allows for example a new packet to be sent to the radio transmitter without delay, even if a slow operation is running concurrently elsewhere.

On the Arduino, with no threads available, the code is still split into separate modules but each one is coded to run quickly without waiting in a loop for a peripheral to respond. For example some temperature sensors can take a second or so to take a measurement, and it’s vital not to sit in a loop waiting for the result.

The C toolchain for Raspberry Pi Pico doesn’t, by default, support threaded code unfortunately. Rather than rebuild it with support added, I opted for the approach I use with Arduino. So the main code starts with initialising each module individually, and then sits in a tight loop calling each module once per loop. It’s then up to each module to return control swiftly so that the loop keeps running quickly and no module is kept waiting for long.
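
As a rough sketch, the resulting structure looks something like this – the module names are hypothetical stand-ins, and each poll function must do a little work and return quickly:

#include "pico/stdlib.h"

void gps_init(void);     void gps_poll(void);
void lora_init(void);    void lora_poll(void);
void sensors_init(void); void sensors_poll(void);

int main(void) {
    stdio_init_all();
    gps_init();
    lora_init();
    sensors_init();

    while (true) {        // the tight loop: nothing in here may block
        gps_poll();       // consume any pending NMEA bytes
        lora_poll();      // start a new packet if the radio is idle
        sensors_poll();   // start, or collect, a slow measurement
    }
}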

Code modules

The GPS code uses a serial port to receive NMEA data from the GPS. NMEA is the standard ASCII protocol used by pretty much every GPS module in existence, and includes the current date, time, latitude, longitude, altitude and other data. All we need to do is confirm that the data is valid, then read and store these key values. The other important function is to ensure that the GPS module is in the correct “flight mode” so that it works at high altitude – without this, it will stop providing new positions above about 18 km altitude.
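
As a rough illustration (not Dave’s actual parser), something like this pulls the key fields out of a $GPGGA sentence – real tracker code would also verify the checksum, cope with empty fields, and convert latitude and longitude from NMEA’s ddmm.mmmm format into decimal degrees:

#include <stdio.h>
#include <string.h>

int parse_gga(const char *line, double *lat, double *lon, double *alt) {
    char time[16], ns, ew;
    int fix, sats;
    double hdop;

    if (strncmp(line, "$GPGGA", 6) != 0)
        return 0;   // not the sentence we're interested in
    if (sscanf(line, "$GPGGA,%15[^,],%lf,%c,%lf,%c,%d,%d,%lf,%lf",
               time, lat, &ns, lon, &ew, &fix, &sats, &hdop, alt) != 9)
        return 0;   // malformed or incomplete sentence
    return fix > 0; // non-zero fix quality means the position is valid
}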

The LoRa radio code checks to see when the module is not transmitting, then builds a new telemetry message containing the above GPS data plus the name of the balloon, any other sensor data, and the landing prediction (see later).

This message is passed to the LoRa chip via SPI, then the chip switches on its radio and modulates the radio signal with the telemetry data. Once the message has been sent then the chip switches on its DIO0 output which is connected to the Raspberry Pi Pico so it knows when it can send another message.
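
In outline, the polling logic might look like this. All of the helper names here are hypothetical, and the real telemetry string carries more fields plus a checksum:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

bool lora_is_idle(void);                      // polls the DIO0 line
void lora_send(const char *buf, size_t len);  // clocks the buffer out over SPI
extern char gps_time[9];
extern double gps_lat, gps_lon;
extern long gps_alt;

void lora_poll(void) {
    if (!lora_is_idle())
        return;                    // the previous packet is still going out

    char msg[128];
    snprintf(msg, sizeof(msg), "$$MYBALLOON,%s,%.5f,%.5f,%ld",
             gps_time, gps_lat, gps_lon, gps_alt);
    lora_send(msg, strlen(msg));   // DIO0 will rise when it has finished
}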

All messages are received on the ground (in this case by a Pi LoRa receiver) and then uploaded to an internet database that in turn drives a live Google map (see image below).

Sensors

Usefully for balloon trackers, the Raspberry Pi Pico can be powered directly from battery via an on-board buck-boost converter.

The input voltage connects through a potential divider to an analogue sense input (ADC3) to allow easy measurement of the battery voltage. Note that the ADC reference voltage is the 3.3V rail, which is noisy – especially when it is also powering external devices such as the GPS and LoRa modules, both of which have rather spiky power consumption – so the code averages out many measurements.
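
A minimal sketch of that averaging, using the Pico SDK’s ADC API (the divider ratio and sample count here are placeholders – set them for your own resistor values; you’ll also need to link hardware_adc and call adc_init() and adc_gpio_init(29) at startup):

#include "pico/stdlib.h"
#include "hardware/adc.h"

#define DIVIDER_RATIO 3.0f   // placeholder: depends on your resistors
#define N_SAMPLES     64     // average many readings to smooth the noise

float read_battery_volts(void) {
    adc_select_input(3);              // ADC3 is wired to the battery divider
    uint32_t sum = 0;
    for (int i = 0; i < N_SAMPLES; i++)
        sum += adc_read();            // each reading is 12-bit, 0..4095
    float counts = (float)sum / N_SAMPLES;
    return counts * (3.3f / 4096.0f) * DIVIDER_RATIO;
}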

An alternative would be to add a precise reference voltage to the ADC but I went for the zero cost software option.

The board temperature can also be measured, this time using ADC4. That’s less useful though for a tracker than an external temperature measurement, so I added a BME280 device for that. The Raspberry Pi Pico samples include code for the BME connected via SPI, but I chose I2C so I needed to replace the SPI calls with I2C calls. Pretty easy. The BME280 returns pressure – probably the most interesting environmental measurement for a balloon tracker – and humidity too.

Landing prediction

So far, everything I’ve done could also be done on a basic AVR chip, e.g. the Arduino Pro Mini, with some room to spare. However, one very useful extra is to add a prediction of the landing point.

We use online flight prediction prior to launch, to determine roughly where the balloon will land (within a few miles) so we know it’s safe to launch without landing near a city for example. This uses a global wind prediction database plus some flight parameters (e.g. ascent rate and burst altitude) to predict the path of the balloon from launch to landing. It can be very accurate if those parameters are followed through on the flight itself.

Of course the actual flight never quite follows the plan – for example the launch might be later than planned, and in changing wind conditions that itself can move the landing point by miles. So it’s useful to have a live prediction during that flight, and indeed we have that, using the same wind database.

However, since it’s online, and 3G/4G can be patchy when chasing a balloon, it’s useful to have an independent landing prediction. This can be done in the tracker itself, by storing the wind speed and direction (deduced from GPS positions) on the way up, measuring the descent rate after burst, applying that to an atmospheric density model to plot the future descent rate to the ground, and then calculating the effect of the wind during descent and finally producing a landing position.

Typical Arduino boards don’t have enough memory to store the measured wind data, but the Raspberry Pi Pico has more than enough. I ported my existing code, which works as follows (a simplified sketch of the descent calculation follows the list):

  1. During ascent, it splits the vertical range into 100-metre sections, in which it stores the latitude and longitude deltas as degrees per second.
  2. Every few seconds, it runs a prediction of the landing position based on the current position, the data in that array, and an estimated descent profile that uses a simple atmospheric model plus default values for payload weight and parachute effectiveness.
  3. During descent, it measures the actual parachute effectiveness, and that figure replaces the default in the calculation in (2).
  4. It calculates the time the payload will spend within each 100 m section of air, then multiplies that by the stored wind speed to calculate the horizontal distance and direction it is expected to travel in that section.
  5. It adds all those sectional movements together, applies them to the current position, and produces the landing prediction.
  6. Finally, it sends that position down to the ground with the rest of the telemetry.
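
A simplified sketch of the descent part of the calculation (steps 4 and 5) might look like the following. Every name here is hypothetical: wind_lat[] and wind_lon[] hold the drift measured on the way up, in degrees per second, one entry per 100 m slice, and descent_rate_at() wraps the atmospheric model.

typedef struct { double lat, lon; } position_t;

position_t predict_landing(position_t pos, double altitude_m,
                           const double *wind_lat, const double *wind_lon,
                           double (*descent_rate_at)(double alt_m)) {
    for (int slice = (int)(altitude_m / 100.0); slice >= 0; slice--) {
        double secs = 100.0 / descent_rate_at(slice * 100.0); // time in slice
        pos.lat += wind_lat[slice] * secs;  // drift picked up in this slice
        pos.lon += wind_lon[slice] * secs;
    }
    return pos;   // the predicted landing position
}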

Phew. Now we know pretty much everything about how balloon trackers work. Thanks Dave! Also, if you want to go on your own near-space flight, check out High Altitude Ballooning.

The post Raspberry Pi Pico balloon tracker appeared first on Raspberry Pi.

How to add a reset button to your Raspberry Pi Pico

Par Alasdair Allan

We’ve tried to make it as easy as possible for you to load your code onto your new Raspberry Pi Pico: press and hold the BOOTSEL button, plug your Pico into your computer, and it’ll mount as a mass storage volume. Then just drag and drop a UF2 file onto the board.

However, not everybody is keen to keep unplugging their micro USB cable every time they want to upload a UF2 onto the board. Don’t worry — there’s more than one way around that problem.

Raspberry Pi Pico with a reset button wired to the GND and RUN pins

Firstly, if you’re developing in MicroPython there isn’t any real need to unplug and replug Pico to write code. The only time you’ll need to do it is the initial upload of the MicroPython firmware, which comes as a UF2. From there on in, you’re talking to the board via the REPL and a serial connection, either in Thonny or some other editor.

However, if you’re developing using our C SDK, then to upload new code to your Pico you have to upload a new UF2. This means you’ll need to unplug and replug the board to put Pico into BOOTSEL mode each time you make a change in your code and want to test it.

No more unplugging with SWD?

The best way around this is to use SWD mode (see Chapter 5 of our C/C++ Getting Started book) to upload code using the debug port, instead of using mass storage (BOOTSEL) mode.

A Raspberry Pi 4 and Raspberry Pi Pico with UART and SWD ports connected together

This gets you debugger support, which is invaluable while developing, and involves adding just three more wires. Afterwards, you’ll never have to unplug your Pico again.

Keep on dragging and dropping

But if you want to stick with uploading by drag-and-drop, adding a reset button to your Raspberry Pi Pico is pretty easy.

Raspberry Pi Pico with a reset button wired to the GND and RUN pins

All you need to do is add a momentary contact button to your breadboard, wired between the GND and RUN pins. Pushing the button pulls RUN low and resets the board.

Then, instead of unplugging and replugging the USB cable when you want to load code onto Pico, you push and hold the RESET button, push the BOOTSEL button, release the RESET button, then release the BOOTSEL button.

Entering BOOTSEL mode without unplugging your Pico

If your board is in BOOTSEL mode and you want to start code you’ve already loaded running again, all you have to do now is briefly push the RESET button.

Leaving BOOTSEL mode without unplugging your Pico.

We’ve seen some people use the 3V3_EN pin instead of the RUN pin. While it’ll work in a pinch, the problem with disabling 3.3V is that GPIOs that are driven from powered external devices will leak like crazy while 3.3V is disabled. There is even the possibility of damage to the chip. So it’s much better to use the RUN pin to make a reset button than the 3V3_EN pin.

What about the other button?

As an aside, if you want to break out the BOOTSEL button as well — perhaps you’re intending to bury your Pico inside an enclosure — you can use TP6 (that is, Test Point 6) on the rear of the board to do so. See Chapter 2 of the Pico Datasheet for details.

Where to find more help and information

Support for developing for Pico can be found on the Raspberry Pi forums. There is also an (unofficial) Discord server where a lot of people active in the new community seem to be hanging out. Feedback on the documentation should be posted as an issue to the pico-feedback repository on GitHub, or directly to the relevant repository it concerns.

All of the documentation, along with lots of other help and links, can be found on the same Getting Started page. If you lose track of where that is in the future, you can always find it from your Pico: to access the page, just press and hold the BOOTSEL button on your Pico, plug it into your laptop or Raspberry Pi, then release the button. Go ahead and open the RPI-RP2 volume, and then click on the INDEX.HTM file.

That will always take you to the Getting Started page.

The post How to add a reset button to your Raspberry Pi Pico appeared first on Raspberry Pi.

Idea registration is open for Coolest Projects 2021!

Par Helen Drury

It’s official: idea registration is finally open for Coolest Projects 2021!

Our Coolest Projects online showcase brings together a worldwide community of young people who make things with technology. Everyone up to age 18, wherever they are in the world, can register for Coolest Projects to become part of this community with their own tech creation! We welcome all ideas, all experience levels, and all kinds of projects.

So let all the young people in your family, school, or coding club know, because Coolest Projects is their chance to be part of something amazing this year!

Taking part is free, and projects will be displayed in the Coolest Projects online gallery for people all across the globe to see! And getting involved is super easy: young creators can start by registering their idea for a project now, leaving them plenty of time — until May — to build the project at home.

To celebrate the passion, effort, and creativity of all the tech creators, we will host a grand live-streamed finale event in June, where our fabulous, world-renowned judges will pick their favourites from among all the projects!

Last year, young tech creators from 39 countries took part in the Coolest Projects online showcase. This year, we hope young people from even more places will share their tech creations with the world!

Skill-building, fun & community

Coolest Projects is a powerful motivator for young people to develop skills in:

  • Idea generation
  • Project design and planning
  • Coding and technology
  • User testing and iteration
  • Presentation

…and they will have lots of fun, be inspired by their peers, and feel like they are part of a truly international community.

  • A Coolest Projects participant
  • A boy working on a Raspberry Pi robot buggy

Let their imaginations run free! 

Through the Coolest Projects online showcase, young people get the opportunity to explore their creativity and realise their tech ambitions! Whatever they come up with as a project idea, we want them to register so the Coolest Projects community can celebrate it.

To help you support young people to create their projects, we’re running a free online workshop called ‘How to design projects with young people’ on 25 February.

What happens next? 

  1. Once their project ideas are registered, the young people can start creating their projects!
  2. From the start of March, they will be able to complete their registration by adding the details of their project, including either a Scratch project link or a short video where they need to answer three important questions about their project. We’ll be offering online sessions to give them tips for their video and help them complete their showcase gallery entry.
  3. Project registration closes on 3 May. But don’t worry if a project isn’t finished by then: we welcome works in progress just as much as completed creations!

We can’t wait to see the wonderful, imaginative things young tech creators in this global community are going to share with the world!

Sign up for the Coolest Projects newsletter to never miss the latest updates about our exciting online showcase, including the free online support sessions for participants.

The post Idea registration is open for Coolest Projects 2021! appeared first on Raspberry Pi.

How to blink an LED with Raspberry Pi Pico in C

Par Alasdair Allan

The new Raspberry Pi Pico is very different from a traditional Raspberry Pi. Pico is a microcontroller, rather than a microcomputer. Unlike a Raspberry Pi it’s a platform you develop for, not a platform you develop on.

Blinking the onboard LED

But you still have choices if you want to develop for Pico, because there is both a C/C++ SDK and an official MicroPython port. Beyond that there are other options opening up, with a port of CircuitPython from Adafruit and the prospect of Arduino support, or even a Rust port.

Here I’m going to talk about how to get started with the C/C++ SDK, which lets you develop for Raspberry Pi Pico from your laptop or Raspberry Pi.

I’m going to assume you’re using a Raspberry Pi; after all, why wouldn’t you want to do that? But if you want to develop for Pico from your Windows or Mac laptop, you’ll find full instructions on how to do that in our Getting Started guide.

Blinking your first LED

When you’re writing software for hardware, the first program that gets run in a new programming environment is typically turning an LED on, off, and then on again. Learning how to blink an LED gets you halfway to anywhere. We’re going to go ahead and blink the onboard LED on Pico, which is connected to pin 25 of the RP2040 chip.

We’ve tried to make getting started with Raspberry Pi Pico as easy as possible. In fact, we’ve provided some pre-built binaries that you can just drag and drop onto your Raspberry Pi Pico to make sure everything is working even before you start writing your own code.

Go to the Getting Started page and click on the “Getting started with C/C++” tab, then the “Download UF2 file” button in the “Blink an LED” box.

Getting started with Raspberry Pi Pico

A file called blink.uf2 will be downloaded to your computer. Go grab your Raspberry Pi Pico board and a micro USB cable. Plug the cable into your Raspberry Pi or laptop, then press and hold the BOOTSEL button on your Pico while you plug the other end of the micro USB cable into the board. Then release the button after the board is plugged in.

A disk volume called RPI-RP2 should pop up on your desktop. Double-click to open it, and then drag and drop the UF2 file into it. The volume will automatically unmount and the light on your board should start blinking.

Blinking an LED

Congratulations! You’ve just put code onto your Raspberry Pi Pico for the first time. Now we’ve made sure that we can successfully get a program onto the board, let’s take a step back and look at how we’d write that program in the first place.

Getting the SDK

Somewhat unsurprisingly, we’ve gone to a lot of trouble to make installing the tools you’ll need to develop for Pico as easy as possible on a Raspberry Pi. We’re hoping to make things easier still in the future, but you should be able to install everything you need by running a setup script.

However, before we do anything, the first thing you’ll need to do is make sure your operating system is up to date.

$ sudo apt update
$ sudo apt full-upgrade

Once that’s complete, you can grab the setup script directly from GitHub and run it at the command line.

$ wget -O pico_setup.sh https://rptl.io/pico-setup-script
$ chmod +x pico_setup.sh
$ ./pico_setup.sh

The script will do a lot of things behind the scenes to configure your Raspberry Pi for development, including installing the C/C++ command line toolchain and Visual Studio Code. Once it has run, you will need to reboot your Raspberry Pi.

$ sudo reboot

The script has been tested and is known to work from a clean, up-to-date installation of Raspberry Pi OS. However, full instructions, along with instructions for manual installation of the toolchain if you prefer to do that, can be found in the “Getting Started” guide.

Once your Raspberry Pi has rebooted we can get started writing code.

Writing code for your Pico

There is a large amount of example code for Pico, and one of the things that the setup script will have done is to download the examples and build both the Blink and “Hello World” examples to verify that your toolchain is working.

But we’re going to go ahead and write our own.

We’re going to be working in the ~/pico directory created by the setup script, and the first thing we need to do is to create a directory to house our project.

$ cd pico
$ ls -la
total 59284
drwxr-xr-x  9 pi pi     4096 Jan 28 10:26 .
drwxr-xr-x 19 pi pi     4096 Jan 28 10:29 ..
drwxr-xr-x 12 pi pi     4096 Jan 28 10:24 openocd
drwxr-xr-x 28 pi pi     4096 Jan 28 10:20 pico-examples
drwxr-xr-x  7 pi pi     4096 Jan 28 10:20 pico-extras
drwxr-xr-x 10 pi pi     4096 Jan 28 10:20 pico-playground
drwxr-xr-x  5 pi pi     4096 Jan 28 10:21 picoprobe
drwxr-xr-x 10 pi pi     4096 Jan 28 10:19 pico-sdk
drwxr-xr-x  7 pi pi     4096 Jan 28 10:22 picotool
-rw-r--r--  1 pi pi 60667760 Dec 16 16:36 vscode.deb
$ mkdir blink
$ cd blink

Now open up your favourite editor and create a file called blink.c in the blink directory.

#include "pico/stdlib.h"
#include "pico/binary_info.h"

const uint LED_PIN = 25;

int main() {

    bi_decl(bi_program_description("First Blink"));
    bi_decl(bi_1pin_with_name(LED_PIN, "On-board LED"));

    gpio_init(LED_PIN);
    gpio_set_dir(LED_PIN, GPIO_OUT);
    while (1) {
        gpio_put(LED_PIN, 0);
        sleep_ms(250);
        gpio_put(LED_PIN, 1);
        sleep_ms(1000);
    }
}

Create a CMakeLists.txt file too.

cmake_minimum_required(VERSION 3.12)

include(pico_sdk_import.cmake)

project(blink)

pico_sdk_init()

add_executable(blink
    blink.c
)

pico_add_extra_outputs(blink)

target_link_libraries(blink pico_stdlib)

Then copy the pico_sdk_import.cmake file from the external folder in your pico-sdk installation to your test project folder.

$ cp ../pico-sdk/external/pico_sdk_import.cmake .

You should now have something that looks like this:

$ ls -la
total 20
drwxr-xr-x  2 pi pi 4096 Jan 28 11:32 .
drwxr-xr-x 10 pi pi 4096 Jan 28 11:01 ..
-rw-r--r--  1 pi pi  398 Jan 28 11:06 blink.c
-rw-r--r--  1 pi pi  211 Jan 28 11:32 CMakeLists.txt
-rw-r--r--  1 pi pi 2721 Jan 28 11:32 pico_sdk_import.cmake
$

We are ready to build our project using CMake.

$ mkdir build
$ cd build
$ export PICO_SDK_PATH=../../pico-sdk
$ cmake ..
$ make

If all goes well you should see a whole bunch of messages flash past in your Terminal window and a number of files will be generated in the build/ directory, including one called blink.uf2.

Just as we did before with the UF2 file we downloaded from the Getting Started page, we can now drag and drop this file on to our Pico.

Unplug the cable from your Pico, then press and hold the BOOTSEL button on your Pico and plug it back in. Then release the button after the board is plugged in.

The new Blink binary
The new blink.uf2 binary can be dragged and dropped on to our Pico

The RPI-RP2 disk volume should pop up on your desktop again. Double-click to open it, then open a file viewer in the pico/blink/build/ directory and drag and drop the UF2 file you’ll find there on to the RPI-RP2 volume. It will automatically unmount, and the light on your board should start blinking. But this time it will blink a little bit differently from before.

Try playing around with the sleep_ms() lines in our code to vary how much time there is between blinks. You could even take a peek at one of the examples, which shows you how to blink the onboard LED in Morse code.

Using Picotool

One way to convince yourself that the program running on your Pico is the one we just built is to use something called picotool. Picotool is a command line utility installed by the setup script that is a Swiss Army knife for all things Pico.

Go ahead and unplug your Pico from your Raspberry Pi, press and hold the BOOTSEL button, and plug it back in. Then run picotool.

$ sudo picotool info -a 
Program Information
 name:          blink
 description:   First Blink
 binary start:  0x10000000
 binary end:    0x10003344

Fixed Pin Information
 25:  On-board LED

Build Information
 sdk version:       1.0.0
 pico_board:        pico
 build date:        Jan 28 2021
 build attributes:  Release

Device Information
 flash size:   2048K
 ROM version:  1
$

You’ll see lots of information about the program currently on your Pico. Then if you want to start it blinking again, just unplug and replug Pico to leave BOOTSEL mode and start your program running once more.

Picotool can do a lot more than this, and you’ll find more information about it in Appendix B of the “Getting Started” guide.

Using Visual Studio Code

So far we’ve been building our Pico projects from the command line, but the setup script also installed and configured Visual Studio Code, and we can build the exact same CMake-based project in the Visual Studio Code environment. You can open it as shown below:

$ cd ~/pico
$ export PICO_SDK_PATH=/home/pi/pico/pico-sdk
$ code

Chapter 6 of the Getting Started guide has full details of how to load and compile a Pico project inside Visual Studio Code. If you’re used to Visual Studio Code, you might be able to make your way from here without much extra help, as the setup script has done most of the heavy lifting for you in configuring the IDE.

What’s left is to open the pico/blink folder and allow the CMake Tools extension to configure the project. After selecting arm-none-eabi as your compiler, just hit the “Build” button in the blue bottom bar.

Building our blink project inside Visual Studio Code

We recommend and support Visual Studio Code as the development environment of choice for developing for Pico: it works cross-platform under Linux, Windows, and macOS, and has good plugin support for debugging. That said, you can also take a look at Chapter 9 of the Getting Started guide, where we talk about how to use both Eclipse and CLion to develop for Pico; if you’re more used to those environments, you should be able to get up and running in either without much trouble.

Where now?

If you’ve got this far, you’ve built and deployed your very first C program to your Raspberry Pi Pico. Well done! The next step is probably going to be saying “Hello World!” over serial back to your Raspberry Pi.
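
As a taste of what that next step looks like, here’s a minimal sketch modelled on the SDK’s standard hello_world example. Note that you’d also need to enable serial output in your CMakeLists.txt, using pico_enable_stdio_uart() or pico_enable_stdio_usb(); the Getting Started guide walks you through this.

#include <stdio.h>
#include "pico/stdlib.h"

int main() {
    stdio_init_all();                 // set up stdio over UART and/or USB
    while (true) {
        printf("Hello, world!\n");    // sent over serial back to your Raspberry Pi
        sleep_ms(1000);
    }
}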

From here, you probably want to sit down and read the Getting Started guide I’ve mentioned throughout the article, especially if you want to make use of SWD debugging, which is discussed at length in the guide. Beyond that, I’d point you to the book on the C/C++ SDK, which has the API-level documentation as well as a high-level discussion of the design of the SDK.

Support for developing for Pico can be found on the Raspberry Pi forums. There is also an (unofficial) Discord server where a lot of people active in the new community seem to be hanging out. Feedback on the documentation should be posted as an issue to the pico-feedback repository on GitHub, or directly to the relevant repository it concerns.

All of the documentation, along with lots of other help and links, can be found on the same Getting Started page from which we grabbed our original UF2 file.

If you lose track of where that is in the future, you can always find it from your Pico: to access the page, just press and hold the BOOTSEL button on your Pico, plug it into your laptop or Raspberry Pi, then release the button. Go ahead and open the RPI-RP2 volume, and then click on the INDEX.HTM file.

That will always take you to the Getting Started page.

The post How to blink an LED with Raspberry Pi Pico in C appeared first on Raspberry Pi.

Raspberry Pi engineers on the making of Raspberry Pi Pico | The MagPi 102

By Gareth Halfacree

In the latest issue of The MagPi Magazine, on sale now, Gareth Halfacree asks what goes into making Raspberry Pi’s first in-house microcontroller and development board.

“It’s a flexible product and platform,” says Nick Francis, Senior Engineering Manager at Raspberry Pi, when discussing the work the Application-Specific Integrated Circuit (ASIC) team put into designing RP2040, the microcontroller at the heart of Raspberry Pi Pico.

“It would have been easy to have said, well, let’s do a purely educational microcontroller: quite low-level, quite limited performance,” he tells us. “But we’ve done the high-performance thing without forgetting about making it easy to use for beginners. To do that at this price point is really good.”

  • James Adams, Chief Operating Officer
  • Nick Francis, Senior Engineering Manager

“I think we’ve done a pretty good job,” agrees James Adams, Chief Operating Officer at Raspberry Pi. “We’ve obviously tossed around a lot of different ideas about what we could include along the way, and we’ve iterated quite a lot and got down to a good set of features.”

A board and chip

“The idea is it’s [Pico] a component in itself,” says James. “The intent was to expose as many of the I/O (input/output) pins for users as possible, and expose them in the DIP-like (Dual Inline Package) form factor, so you can use Raspberry Pi Pico as you might use an old 40-pin DIP chip. Now, Pico is one pitch (2.54 millimetres, or 0.1 inches) wider than a ‘standard’ 40-pin DIP, so not exactly the same, but still very similar.

“After the first prototype, I changed the pins to be castellated so you can solder it down as a module, without needing to put any headers in. Which is, yes, another nod to using it as a component.”

Getting the price right

“One of the things that we’re very excited about is the price,” says James. “We’re able to make these available cheap as chips – for less than the price of a cup of coffee.”

“It’s extremely low-cost,” Nick agrees. “One of the driving requirements right at the start was to build a very low-cost chip, but which also had good performance. Typically, you’d expect a microcontroller with this specification to be more expensive, or one at this price to have a lower specification. We tried to push the performance and keep the cost down.”

“We’re able to make these available cheap as chips.”

James Adams

Raspberry Pi Pico also fits nicely into the Raspberry Pi ecosystem: “Most people are doing a lot of the software development for this, the SDK (software development kit) and all the rest of it, on Raspberry Pi 4 or Raspberry Pi 400,” James explains. “That’s our primary platform of choice. Of course, we’ll make it work on everything else as well. I would hope that it will be as easy to use as any other microcontroller platform out there.”

Eben Upton on RP2040

“RP2040 is an exciting development for Raspberry Pi because it’s Raspberry Pi people making silicon,” says Eben Upton, CEO and co-founder of Raspberry Pi. “I don’t think other people bring their A-game to making microcontrollers; this team really brought its A-game. I think it’s just beautiful.

Is Pico really that small, or is Eben a giant?

“What does Raspberry Pi do? Well, we make products which are high performance, which are cost-effective, and which are implemented with insanely high levels of engineering attention to detail – and this is that. This is that ethos, in the microcontroller space. And that couldn’t have been done with anyone else’s silicon.”

Issue #102 of The MagPi Magazine is out NOW

MagPi 102 cover

Never want to miss an issue? Subscribe to The MagPi and we’ll deliver every issue straight to your door. And if you’re a new subscriber taking out the 12-month subscription, you’ll get a free Raspberry Pi Zero bundle, with a Raspberry Pi Zero W and accessories.

The post Raspberry Pi engineers on the making of Raspberry Pi Pico | The MagPi 102 appeared first on Raspberry Pi.
