
The Howff 3D scanning rig | The MagPi 99

How do you create a 3D model of a historic graveyard? With eight Raspberry Pi computers, as Rob Zwetsloot discovers in the latest issue of The MagPi magazine, out now.

The software builds up the 3D model of the graveyard

“In the city centre of Dundee is a historical burial ground, The Howff,” says Daniel Muirhead. We should probably clarify that he’s a 3D artist. “This old graveyard is densely packed with around 1500 gravestones and other funerary monuments, which happens to make it an excellent technical challenge for photogrammetry photo capture.”

This architecture, the stone paths, and the vibrant flora are why Daniel ended up creating a 3D-scanning rig out of eight Raspberry Pi computers. And the results are quite stunning.

Eight Raspberry Pi computers are mounted to the ball, with cameras pointing towards the ground

“The goal of this project was to capture photos for use in generating a 3D model of the ground,” he continues. “That model will be used as a base for attaching individual gravestone models and eventually building up a full composite model of this complex subject. The ground model will also be purposed for rendering an ultra-high-resolution map of the graveyard. The historical graveyard has a very active community group that are engaged in its study and digitisation, the Dundee Howff Conservation Group, so I will be sharing my digital outputs with them.”

Google graveyard

There are thousands of pictures, like this one, being used to create the model

To move the rig throughout the graveyard, Daniel used himself as the major moving part. With the eight Raspberry Pi cameras each taking a photo every two seconds, he captured more than 180,000 photos across 13 hours of sessions.
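As a back-of-the-envelope check (a sketch, not Daniel's actual code), the capture volume follows directly from those numbers:

```python
# Eight cameras, one photo every two seconds, roughly 13 hours of
# capture sessions: estimate the total number of photos taken.
def expected_photo_count(cameras: int, interval_s: float, hours: float) -> int:
    shots_per_camera = int(hours * 3600 / interval_s)
    return cameras * shots_per_camera

print(expected_photo_count(cameras=8, interval_s=2, hours=13))  # 187200
```

That comes out at 187,200 frames, consistent with the "over 180,000 photos" reported above.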

“The rig was held above my head and the cameras were angled in such a way as to occlude me from view, so I was not captured in the photographs, which instead were focused on the ground,” he explains. “Of the eight cameras, four were the regular model with 53.5° horizontal field of view (FoV), and the other four were a wide-angle model with 120° FoV. These were arranged on the rig pointing outwards in eight different directions, alternating regular and wide-angle, all angled at a similar pitch down towards the ground. During capture, the rig was rotated by +45° for every second position, so that the wide-angles were facing where the regulars had been facing on the previous capture, and vice versa.”
Daniel worked according to a very specific grid pattern, staying in one spot for five seconds at a time, in the hope that at the end he’d have every patch of ground photographed from 16 different positions and angles.
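The alternating-lens arrangement can be modelled in a few lines (an illustrative sketch, not the project's code): eight cameras at 45° spacing, and a +45° rig rotation at every second position, so each heading is covered by both lens types, giving the 16 views per patch:

```python
# Eight cameras at 45° spacing, alternating regular and wide-angle
# lenses. Rotating the rig by +45° swaps which lens type faces each
# heading, so every patch gets both kinds of coverage.
def camera_headings(rotation_deg=0):
    lenses = ["regular", "wide-angle"] * 4
    return {(i * 45 + rotation_deg) % 360: lens for i, lens in enumerate(lenses)}

first = camera_headings(0)    # heading 0 -> regular, 45 -> wide-angle, ...
second = camera_headings(45)  # heading 45 -> regular: the lens types have swapped
```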

Maker Daniel Muirhead is a 3D artist with an interest in historical architecture

With a lot of photo data to scan through for something fairly complex, we wondered how well the system had worked. Daniel tells us the only problems he had were some bugs to fix in his code: “The images were separated into batches of around 10,000 (1250 photos from each of the eight cameras), plugged into the photogrammetry software, and the software had no problem in reconstructing the ground as a 3D model.”
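That batching step is easy to reproduce in outline (an illustrative sketch only; the file names and structure are assumptions): take 1,250 photos from each of the eight cameras per batch, roughly 10,000 images per photogrammetry run.

```python
# Illustrative batching: combine the same time slice from every camera,
# 1250 photos per camera, i.e. ~10,000 images per photogrammetry batch.
def make_batches(per_camera_photos, per_camera=1250):
    """per_camera_photos: one time-ordered list of photos per camera."""
    n = min(len(cam) for cam in per_camera_photos)
    batches = []
    for start in range(0, n, per_camera):
        batch = []
        for cam in per_camera_photos:
            batch.extend(cam[start:start + per_camera])
        batches.append(batch)
    return batches

# 23,400 shots per camera (13 hours at one shot every two seconds)
cams = [[f"cam{c}_{i:05}.jpg" for i in range(23400)] for c in range(8)]
batches = make_batches(cams)
print(len(batches), len(batches[0]))  # 19 10000
```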

Accessible 3D surveying

He’s now working towards making this kind of 3D surveying accessible and low-cost for others who might want to use it. “Low-cost in the triple sense of financial, labour, and time,” he clarifies. “I have logged around 8000 hours in a variety of photogrammetry software packages, in the process capturing over 300,000 photos with a regular camera for use in such projects, so I have some experience in this area.”

“With the current state of technology, it should be possible with around £1000 in equipment to perform a terrestrial photo-survey of a town centre in under an hour, then with a combined total of maybe three hours’ manual processing and 20 hours’ automated computer processing, generate a high-quality 3D model, the total production time being under 24 hours. It should be entirely plausible for a local community group to use such a method to perform weekly (or at least monthly) 3D snapshots of their town centre.”

The MagPi issue 99 – Out now

The MagPi magazine is out now, available in print from the Raspberry Pi Press online store, your local newsagents, and the Raspberry Pi Store, Cambridge.

You can also download the PDF directly from the MagPi magazine website.

The post The Howff 3D scanning rig | The MagPi 99 appeared first on Raspberry Pi.

YouTuber Jeff Geerling reviews Raspberry Pi Compute Module 4

We love seeing how quickly our community of makers responds when we drop a new product, and one of the fastest off the starting block when we released the new Raspberry Pi Compute Module 4 last week was YouTuber Jeff Geerling.

Jeff Geerling

We snuck one to him early and made him keep it a secret until launch day, so we could see what one of YouTube’s chief advocates for our Compute Module line thought of our newest baby.

So how does our newest board compare to its predecessor, Compute Module 3+? In Jeff’s first video (above) he reviews some of Compute Module 4’s new features, and he has gone into tons more detail in this blog post.

Jeff also hosted a live Q&A stream (above), covering some of the most-asked questions about Compute Module 4 and sharing some features he missed in his initial review video.

His next video (above) is pretty cool. Jeff explains:

“Everyone knows you can overclock the Pi 4. But what happens when you overclock a Compute Module 4? The results surprised me!”

Jeff Geerling

And again, there’s tons more detail on temperature measurement, storage performance, and more on Jeff’s blog.
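For context, overclocking on the Raspberry Pi 4 family is usually a matter of editing /boot/config.txt. The fragment below is purely illustrative; stable values vary from board to board and higher clocks need adequate cooling, so see Jeff's posts for measured results:

```ini
# Illustrative overclock settings for /boot/config.txt
# (example values only; raise in small steps and monitor temperatures)
over_voltage=6
arm_freq=2000
```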

Top job, Jeff. We have our eyes on your channel for more videos on Compute Module 4, coming soon.

If you like what you see on his YouTube channel, you can also sponsor Jeff on GitHub, or support his work via Patreon.

The post YouTuber Jeff Geerling reviews Raspberry Pi Compute Module 4 appeared first on Raspberry Pi.

Digital making projects about protecting our planet

Par : Emma Posey

Explore our new free pathway of environmental digital making projects for young people! These new step-by-step projects teach learners Scratch coding and include real-world data — from data about the impact of deforestation on wildlife to sea turtle tracking information.

By following along with the digital making projects online, young people will discover how they can use technology to protect our planet, all while improving their computing skills.

Photo of a young woman holding an origami bird up to the camera
One of the new projects is an automatic creature counter based on colour recognition with Scratch

The projects help young people effect change

In the projects, learners are introduced to 5 of the United Nations’ 17 Sustainable Development Goals (SDGs) with an environment focus:

  • Affordable and Clean Energy
  • Responsible Consumption and Production
  • Climate Action
  • Life Below Water
  • Life on Land
Screenshot of a Scratch project showing a panda and the Earth
The first project in the new pathway is an animation about the UN’s five SDGs focused on the environment.

Technology, science, maths, geography, and design all play a part in the projects. Following along with the digital making projects, young people learn coding and computing skills while drawing on a range of data from across the world. In this way they will discover how computing can be harnessed to collect environmental data, to explore causes of environmental degradation, to see how humans influence the environment, and ultimately to mitigate negative effects.

Where does the real-world data come from?

To help us develop these environmental digital making projects, we reached out to a number of organisations with green credentials:

Green Sea Turtle Alasdair Davies Raspberry Pi
A sea turtle is being tagged so its movements can be tracked

Inspiring young people about coding with real-world data

The digital making projects, created with 9- to 11-year-old learners in mind, support young people on a step-by-step pathway to develop their skills gradually. Using the block-based visual programming language Scratch, learners build on programming foundations such as sequencing, loops, variables, and selection. The project pathway is designed so that learners can apply what they learned in earlier projects when following along with later projects!

The final project in the pathway, ‘Turtle tracker’, uses real-world data of migrating sea turtles!

We’re really excited to help learners explore the relationship between technology and the environment with these new digital making projects. Connecting their learning to real-world scenarios not only allows young people to build their knowledge of computing, but also gives them the opportunity to effect change and make a difference to their world!

Discover the new digital making projects yourself!

With Green goals, learners create an animation to present the United Nations’ environment-focused Sustainable Development Goals.

Through Save the shark, young people explore sharks’ favourite food source (fish, not humans!), as well as the impact of plastic in the sea, which harms sharks in their natural ocean habitat.

Illustration of a shark with sunglasses

With the Tree life simulator project guide, learners create a project that shows the impact of land management and deforestation on trees, wildlife, and the environment.

Computers can be used to study wildlife in areas where it’s not practical to do so in person. In Count the creatures, learners create a wildlife camera using their computer’s camera and Scratch’s new video sensing extension!

Electricity is important. After all, it powers the computer that learners are using! In Electricity generation, learners input real data about the type and amount of natural resources countries across the world use to generate electricity, and they then compare the results using an animated data visualisation.

Understanding the movements of endangered turtles helps to protect these wonderful animals. In this new Turtle tracker project, learners use tracking data from real-life turtles to map their movements off the coast of West Africa.

Code along wherever you are!

All of our projects are free to access online at any time and include step-by-step instructions. They can be undertaken in a club, classroom, or at home. Young people can share the project they create with their peers, friends, family, and the wider Scratch community.

Visit the Protect our planet pathway to experience the projects yourself.

The post Digital making projects about protecting our planet appeared first on Raspberry Pi.

Talk to your Raspberry Pi | HackSpace 36

In the latest issue of HackSpace Magazine, out now, @MrPJEvans shows you how to add voice commands to your projects with a Raspberry Pi 4 and a microphone.

You’ll need:

It’s amazing how we’ve come from everything being keyboard-based to so much voice control in our lives. Siri, Alexa, and Cortana are everywhere and happy to answer questions, play you music, or help automate your household.

For the keen maker, these offerings may not be ideal for augmenting their latest project as they are closed systems. The good news is, with a bit of help from Google, you can add voice recognition to your project and have complete control over what happens. You just need a Raspberry Pi 4, a microphone array, and a Google account to get started.

Set up your microphone

This clever speaker uses four microphones working together to increase accuracy. A ring of twelve RGB LEDs can be coded to react to events, just like an Amazon Echo

For a home assistant device, being able to hear you clearly is essential. Many microphones are either too low-quality for the task, or are unidirectional: they only hear well in one direction. To the rescue comes Seeed’s ReSpeaker, an array of four microphones with some clever digital processing to provide the kind of listening capability normally found on an Amazon Echo device or Google Assistant. It’s also in a convenient HAT form factor, and comes with a ring of twelve RGB LEDs, so you can add visual effects too. Start with a Raspberry Pi OS Lite installation, and follow these instructions to get your ReSpeaker ready for use.
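As a toy illustration of what the LED ring enables (a hypothetical helper, not part of Seeed's driver): map a direction-of-arrival angle reported by the microphone array to the nearest of the twelve LEDs, so the ring can light up towards the speaker.

```python
# Map a direction-of-arrival angle (degrees) to the nearest of the
# twelve RGB LEDs on the ReSpeaker ring (30 degrees between LEDs).
def doa_to_led(angle_deg: float, num_leds: int = 12) -> int:
    step = 360 / num_leds
    return round((angle_deg % 360) / step) % num_leds

print(doa_to_led(0), doa_to_led(100), doa_to_led(355))  # 0 3 0
```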

Install Snowboy

You’ll see later on that we can add the power of Google’s speech-to-text API by streaming audio over the internet. However, we don’t want to be doing that all the time. Snowboy is an offline ‘hotword’ detector. We can have Snowboy running all the time, and when your choice of word is ‘heard’, we switch to Google’s system for accurate processing. Snowboy can only handle a few words, so we only use it for the ‘trigger’ words. It’s not the friendliest of installations so, to get you up and running, we’ve provided step-by-step instructions.

There’s also a two-microphone ReSpeaker for the Raspberry Pi Zero

Create your own hotword

As we’ve just mentioned, we can have a hotword (or trigger word) to activate full speech recognition so we can stay offline. To do this, Snowboy must be trained to understand the word chosen. The code that describes the word (and specifically your pronunciation of it) is called the model. Luckily, this whole process is handled for you at snowboy.kitt.ai, where you can create a model file in a matter of minutes and download it. Just say your choice of words three times, and you’re done. Transfer the model to your Raspberry Pi 4 and place it in your home directory.

Let’s go Google

ReSpeaker can use its multiple mics to detect distance and direction

After the trigger word is heard, we want Google’s fleet of super-servers to help us transcribe what is being said. To use Google’s speech-to-text API, you will need to create a Google application and give it permissions to use the API. When you create the application, you will be given the opportunity to download ‘credentials’ (a small text file) which will allow your setup to use the Google API. Please note that you will need a billable account for this, although you get one hour of free speech-to-text per month. Full instructions on how to get set up can be found here.

Install the SDK and transcriber

To use Google’s API, we need to install the firm’s speech-to-text SDK for Python so we can stream audio and get the results. On the command line, run the following:

pip3 install google-cloud-speech

(If you get an error, run sudo apt install python3-pip, then try again.)

Remember that credentials file? We need to tell the SDK where it is:

export GOOGLE_APPLICATION_CREDENTIALS="/home/pi/[FILE_NAME].json"

(Don’t forget to replace [FILE_NAME] with the actual name of the JSON file.)
Now download and run this test file. Try saying something and see what happens!

Putting it all together

Now that we can talk to our Raspberry Pi, it’s time to link the hotword system to the Google transcription service to create our very own virtual assistant. We’ve provided sample code so that you can see these two systems running together. Run it, then say your chosen hotword. Now ask ‘what time is it?’ to get a response. (Don’t forget to connect a speaker to the audio output if you’re not using HDMI.) Now it’s over to you. Try adding code to respond to certain commands such as ‘turn the light on’, or ‘what time is it?’
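The final step, acting on the transcript, can be as simple as phrase matching. Here is a minimal sketch (the command table and wording are placeholders, not the article's sample code):

```python
import time

# Map recognised phrases to actions; extend this table for new commands.
COMMANDS = {
    "what time is it": lambda: time.strftime("The time is %H:%M"),
    "turn the light on": lambda: "Turning the light on",  # stub: drive a GPIO pin here
}

def handle_transcript(transcript: str) -> str:
    """Run the first command whose phrase appears in the transcript."""
    text = transcript.lower().strip(" ?!.")
    for phrase, action in COMMANDS.items():
        if phrase in text:
            return action()
    return "Sorry, I didn't catch that."

print(handle_transcript("Turn the light on"))  # Turning the light on
```

In a real assistant you would call handle_transcript() with the text returned by the Google speech-to-text API after the hotword fires.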

Get HackSpace magazine issue 36 – Out now!

Each month, HackSpace magazine brings you the best projects, tips, tricks and tutorials from the makersphere. You can get it from the Raspberry Pi Press online store, The Raspberry Pi store in Cambridge, or your local newsagents.

Each issue is free to download from the HackSpace magazine website.

The post Talk to your Raspberry Pi | HackSpace 36 appeared first on Raspberry Pi.

Take part in the PA Raspberry Pi Competition for UK schools

Every year, we are proud to judge at the PA Raspberry Pi Competition for UK schools, run by PA Consulting. In this free competition, teams of students from schools all over the UK imagine, design, and create Raspberry Pi–powered inventions.

Female engineer with Raspberry Pi device. Copyright © University of Southampton
Let’s inspire young people to take up a career in STEM!
© University of Southampton

The PA Raspberry Pi Competition aims to inspire young people aged 8 to 18 to learn STEM skills, teamwork, and creativity, and to move toward a career in STEM.

We invite all UK teachers to register if you have students at your school who would love to take part!

PA Consulting provides the first 100 teams that complete registration and submit their entry form with a free Raspberry Pi Starter Kit for creating their invention.

This year’s competition theme: Innovating for a better world

The theme is deliberately broad so that teams can show off their creativity and ingenuity.

  • All learners aged 8 to 18 can take part, and projects are judged in four age groups
  • The judging categories include team passion; simplicity and clarity of build instructions; world benefit; and commercial potential
  • The proposed budget for a team’s invention is around £100
  • The projects can be part of your students’ coursework
  • Entries must be submitted by Monday 22 March 2021
  • You’ll find more details and inspiration on the PA Raspberry Pi Competition webpage

Among all the entries, judges from the tech sector and the Raspberry Pi Foundation choose the finalists with the most outstanding inventions in their age group.

The Dynamix team, finalists in last round’s Y4–6 group, built a project called SmartRoad+

The finalist teams take part in an exciting awards event to present their creations, where the winners are selected. This round’s PA Raspberry Pi Awards Ceremony takes place on Wednesday 28 April 2021, and PA Consulting are currently considering whether this will be a physical or virtual event.

All teams that participate in the competition will be rewarded with certificates, and there’s of course the chance to win trophies and prizes too!

You can prepare with our free online courses

If you would like to boost your skills so you can better support your team, then sign up to one of our free online courses designed for educators:

Take inspiration from the winners of the previous round

All entries are welcome, no matter what your students’ experience is! Here are the outstanding projects from last year’s competition:

A look inside the air quality-monitoring project by Team Tempest, last round’s winners in the Y7–9 group

Find out more at the PA Raspberry Pi Competition webinar!

To support teachers in guiding their teams through the competition, PA Consulting will hold a webinar on 12 November 2020 at 4.30–5.30pm. Sign up to hear first-hand what’s involved in taking part in the PA Raspberry Pi Competition, and use the opportunity to ask questions!

The post Take part in the PA Raspberry Pi Competition for UK schools appeared first on Raspberry Pi.

New book: Create Graphical User Interfaces with Python

Laura Sach and Martin O’Hanlon, who are both Learning Managers at the Raspberry Pi Foundation, have written a brand-new book to help you to get more out of your Python projects.

Cover of the book Create Graphical User Interfaces with Python

In Create Graphical User Interfaces with Python, Laura and Martin show you how to add buttons, boxes, pictures, colours, and more to your Python programs using the guizero library, which is easy to use and accessible for all, no matter your Python skills.

This new 156-page book is suitable for everyone — from beginners to experienced Python programmers — who wants to explore graphical user interfaces (GUIs).

Meet the authors

Screenshot of a Digital Making at Home live stream session
That’s Martin in the blue T-shirt with our Digital Making at Home live stream hosts Matt and Christina

You might have met Martin recently on one of our weekly Digital Making at Home live streams for young people, where he was a guest for an ‘ooey-GUI’ code-along session. He talked about his background and what it’s like creating projects and learning resources on a day-to-day basis.

Laura is also pretty cool! Here she is showing you how to solder your Raspberry Pi header pins:

Hi Laura!

Martin and Laura are also tonnes of fun on Twitter. You can find Martin as @martinohanlon, and Laura goes by @codeboom.

10 fun projects

In Create Graphical User Interfaces with Python, you’ll find ten fun Python projects to create with guizero, including a painting program, an emoji match game, and a stop-motion animation creator.

A double-page from the book Create Graphical User Interfaces with Python
A peek inside Laura and Martin’s new book

You will also learn:

  • How to create fun Python games and programs
  • How to code your own graphical user interfaces using windows, text boxes, buttons, images, and more
  • What event-based programming is
  • What good (and bad) user interface design is
A double-page from the book Create Graphical User Interfaces with Python
Ain’t it pretty?

Where can I get it?

You can buy Create Graphical User Interfaces with Python now from the Raspberry Pi Press online store, or the Raspberry Pi store in Cambridge, UK.

And if you don’t need the lovely new book, with its new-book smell, in your hands in real life, you can download a PDF version for free, courtesy of The MagPi magazine.

The post New book: Create Graphical User Interfaces with Python appeared first on Raspberry Pi.

Formative assessment in the computer science classroom

In computing education research, considerable focus has been put on the design of teaching materials and learning resources, and investigating how young people learn computing concepts. But there has been less focus on assessment, particularly assessment for learning, which is called formative assessment. As classroom teachers are engaged in assessment activities all the time, it’s pretty strange that researchers in the area of computing and computer science in school have not put a lot of focus on this.

Shuchi Grover

That’s why in our most recent seminar, we were delighted to hear about formative assessment — assessment for learning — from Dr Shuchi Grover, of Looking Glass Ventures and Stanford University in the USA. Shuchi has a long track record of work in the learning sciences (called education research in the UK), and her contribution in the area of computational thinking has been hugely influential and widely drawn on in subsequent research.

Two types of assessment

Assessment is typically divided into two types:

  1. Summative assessment (i.e. assessing what has been learned), which typically takes place through examinations, final coursework, projects, etcetera.
  2. Formative assessment (i.e. assessment for learning), which is not aimed at giving grades and typically takes place through questioning, observation, plenary classroom activities, and dialogue with students.

Through formative assessment, teachers seek to find out where students are at, in order to use that information both to direct their preparation for the next teaching activities and to give students useful feedback to help them progress. Formative assessment can be used to surface misconceptions (or alternate conceptions) and for diagnosis of student difficulties.

Venn diagram of how formative assessment practices intersect with teacher knowledge and skills

As Shuchi outlined in her talk, a variety of activities can be used for formative assessment, for example:

  • Self- and peer-assessment activities (commonly used in schools).
  • Different forms of questioning and quizzes to support learning (not graded tests).
  • Rubrics and self-explanations (for assessing projects).

A framework for formative assessment

Shuchi described her own research in this topic, including a framework she has developed for formative assessment. This comprises three pillars:

  1. Assessment design.
  2. Teacher or classroom practice.
  3. The role of the community in furthering assessment practice.
Shuchi Grover's framework for formative assessment

Shuchi’s presentation then focused on part of the first pillar in the framework: types of assessments, and particularly types of multiple-choice questions that can be automatically marked or graded using software tools. Tools obviously don’t replace teachers, but they can be really useful for providing timely and short-turnaround feedback for students.

As part of formative assessment, carefully chosen questions can also be used to reveal students’ misconceptions about the subject matter — these are called diagnostic questions. Shuchi discussed how in a classroom setting, teachers can employ this kind of question to help them decide what to focus on in future lessons, and to understand their students’ alternate or different conceptions of a topic. 

Formative assessment of programming skills

The remainder of the seminar focused on the formative assessment of programming skills. There are many ways of assessing developing programming skills (see Shuchi’s slides), including Parsons problems, microworlds, hotspot items, rubrics (for artifacts), and multiple-choice questions. As an MCQ example, in the figure below you can see some snippets of block-based code, which students need to read and work out what the outcome of running the snippets will be. 


Questions such as this highlight that it’s important for learners to engage in code comprehension and code reading activities when learning to program. This really underlines the fact that such assessment exercises can be used to support learning just as much as to monitor progress.
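A Parsons problem, one of the formats listed above, gives learners the correct lines of a program in shuffled order; marking can be a simple order check. A sketch for illustration (the exercise and program are invented examples):

```python
# A tiny Parsons problem: learners must reorder these lines into a
# working program; the marker checks the submitted ordering.
SOLUTION = [
    "total = 0",
    "for n in range(1, 4):",
    "    total = total + n",
    "print(total)",  # the assembled program prints 6
]

def check_parsons(submission):
    """Correct when the learner's ordering matches the solution."""
    return submission == SOLUTION

shuffled = [SOLUTION[1], SOLUTION[0], SOLUTION[3], SOLUTION[2]]
print(check_parsons(shuffled), check_parsons(SOLUTION))  # False True
```

Because the learner never types code, the exercise isolates code reading and sequencing from syntax, which is exactly the comprehension skill the paragraph above describes.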

Formative assessment: our support for teachers

Interestingly, Shuchi commented that in her experience, teachers in the UK are more used to using code reading activities than US teachers. This may be because code comprehension activities are embedded into the curriculum materials and support for pedagogy, both of which the Raspberry Pi Foundation developed as part of the National Centre for Computing Education in England. We explicitly share approaches to teaching programming that incorporate code reading, for example the PRIMM approach. Moreover, our work at the Raspberry Pi Foundation includes the Isaac Computer Science online learning platform for A level computer science students and teachers, which is centred on different types of questions designed as tools for learning.

All these materials are freely available to teachers wherever they are based.

Further work on formative assessment

Based on her work in US classrooms researching this topic, Shuchi’s call to action for teachers was to pay attention to formative assessment in computer science classrooms and to investigate what useful tools can support them to give feedback to students about their learning. 

Advice from Shuchi Grover on how to embed formative assessment in classroom practice

Shuchi is currently involved in an NSF-funded research project called CS Assess to further develop formative assessment in computer science via a community of educators. For further reading, there are two chapters related to formative assessment in computer science classrooms in the recently published book Computer Science in K-12 edited by Shuchi.

There was much to take away from this seminar, and we are really grateful to Shuchi for her input and look forward to hearing more about her developing project.

Join our next seminar

If you missed the seminar, you can find the presentation slides and a recording of Shuchi’s talk on our seminars page.

In our next seminar on Tuesday 3 November at 17:00–18:30 BST / 12:00–13:30 EDT / 9:00–10:30 PT / 18:00–19:30 CEST, I will be presenting my work on PRIMM, particularly focusing on language and talk in programming lessons. To join, simply sign up with your name and email address.

Once you’ve signed up, we’ll email you the seminar meeting link and instructions for joining. If you attended this past seminar, the link remains the same.

The post Formative assessment in the computer science classroom appeared first on Raspberry Pi.

New Chair and Trustees of the Raspberry Pi Foundation

I am delighted to share the news that we have appointed a new Chair and Trustees of the Raspberry Pi Foundation. Between them, they bring an enormous range of experience and expertise to what is already a fantastic Board of Trustees, and I am really looking forward to working with them.

  • John Lazar
  • Amali de Alwis
  • Charles Leadbeater
  • Dan Labbad

New Chair of the Board of Trustees: John Lazar 

John Lazar has been appointed as the new Chair of the Board of Trustees. John is a software engineer and business leader who is focused on combining technology and entrepreneurship to generate lasting positive impact.

Formerly the Chairman and CEO of Metaswitch Networks, John is now an angel investor, startup mentor, non-executive chairman and board director, including serving as the Chair of What3Words. He is a Fellow of the Royal Academy of Engineering and played an active role in developing the programme of study for England’s school Computer Science curriculum. John has also spent many years working on tech-related non-profit initiatives in Africa and co-founded Enza Capital, which invests in early-stage African technology companies that solve pressing problems.

John takes over the Chair from David Cleevely, who has reached the end of his two three-year terms as Trustee and Chair of the Foundation. David has made a huge contribution to the Foundation over that time, and we are delighted that he will continue to be involved in our work as one of the founding members of the Supporters Club.

New Trustees: Amali de Alwis, Charles Leadbeater, Dan Labbad

Alongside John, we are welcoming three new Trustees to the Board of Trustees: 

  • Amali de Alwis is the UK Managing Director of Microsoft for Startups, and is the former CEO of Code First: Girls. She is also a Board member at Ada National College for Digital Skills, sits on the Diversity & Inclusion Board at the Institute of Coding, is an Advisory Board member at the Founders Academy, and was a founding member at Tech Talent Charter.
  • Charles Leadbeater is an independent author, a social entrepreneur, and a leading authority on innovation and creativity. He has advised companies, cities, and governments around the world on innovation strategy and has researched and written extensively on innovation in education. Charles is also a Trustee of the Paul Hamlyn Foundation.
  • Dan Labbad is Chief Executive and Executive Member of the Board of The Crown Estate. He was previously at Lendlease, where he was Chief Executive Officer of Europe from 2009. Dan is also a Director of The Hornery Institute and Ark Schools.

New Member: Suranga Chandratillake 

I am also delighted to announce that we have appointed Suranga Chandratillake as a Member of the Raspberry Pi Foundation. Suranga is a technologist, entrepreneur, and investor.

Suranga Chandratillake

He founded the intelligent search company blinkx and is now a General Partner at Balderton Capital. Suranga is a Fellow of the Royal Academy of Engineering and a World Economic Forum Young Global Leader, and he serves on the UK Government’s Council for Science and Technology.

What is a Board of Trustees anyway? 

As a charity, the Raspberry Pi Foundation is governed by a Board of Trustees that is ultimately responsible for what we do and how we are run. It is the Trustees’ job to make sure that we are focused on our mission, which for us means helping more people learn about computing, computer science, and related subjects. The Trustees also have all the usual responsibilities of company directors, including making sure that we use our resources effectively. As Chief Executive, I am accountable to the Board of Trustees. 

We’ve always been fortunate to attract the most amazing people to serve as Trustees and, as volunteers, they are incredibly generous with their time, sharing their expertise and experience on a wide range of issues. They are an important part of the team. Trustees serve for up to two terms of three years so that we always have fresh views and experience to draw on.

How do you appoint Trustees? 

Appointments to the Board of Trustees follow open recruitment and selection processes that are overseen by the Foundation’s Nominations Committee, supported by independent external advisers. Our aim is to appoint Trustees who bring different backgrounds, perspectives, and lived experience, as well as a range of skills. As with all appointments, we consider diversity at every stage of the recruitment and selection process.

Formally, Trustees are elected by the Foundation’s Members at our Annual General Meeting. This year’s AGM took place last week on Zoom. Members are also volunteers, and they play an important role in holding the Board of Trustees to account, helping to shape our strategy, and acting as advocates for our mission.

You can see the full list of Trustees and Members on our website.

The post New Chair and Trustees of the Raspberry Pi Foundation appeared first on Raspberry Pi.

Designing the Raspberry Pi Compute Module 4

Par : Alex Bate

Raspberry Pi Compute Module 4 designer Dominic Plunkett was kind enough to let us sit him down for a talk with Eben, before writing up his experience of bringing our latest board to life for today’s blog post. Enjoy.

When I joined Raspberry Pi, James, Eben and Gordon already had some ideas on the features they would like to see on the new Compute Module 4, and it was down to me to take these ideas and turn them into a product. Many people think design is a nice linear process: ideas, schematics, PCB, and then final product. In the real world the design process isn’t like this, and to get the best designs I often try something and iterate around the design loop to get the best possible solution within the constraints.

Form factor change

Previous Compute Modules were all in a 200-pin SODIMM form factor, but two important considerations pushed us to think about moving to a different form factor: the need to expose useful interfaces of the BCM2711 that are not present in earlier SoCs, and the desire to add extra components, which meant we needed to route tracks differently to make space on the PCB for the additional parts.

Breaking out BCM2711’s high-speed interfaces

We knew we wanted to get the extra features of the BCM2711 out to the connector so that users could make use of them in their products. High-speed interfaces like PCIe and HDMI are so fast coming out of the BCM2711 that they need special IO pins that can’t also support GPIO: if we were to change the functionality of a GPIO pin to one of the new high-speed signals, this would break backwards compatibility.

We could consider adding some sort of multiplexer to swap between old and new functionality, but this would cost space on the PCB, as well as reducing the integrity of the fast signals. This consideration alone drives the design to a new pinout. We could have tried to use one of the SODIMM connectors with extra pins; while this would give a board with similar dimensions to the existing Compute Modules, it too would break compatibility.

Compute Module 4 mounted on the IO Board

PCB space for additional components

We also wanted to add extra items to the PCB, so PCB space to put the additional parts was an important consideration. If you look carefully at a Compute Module 3 you can see a lot of tracks carrying signals from one side of the SoC to the pins on the edge connector. These tracks take up valuable PCB space, preventing components being fitted there. We could add extra PCB layers to move these tracks from an outer layer to an inner layer, but these extra layers add to the cost of the product.

This was one of the main drivers in changing to having two connectors on different edges of the board: doing so saves having to route tracks all the way across the PCB. So we arrived at a design that incorporated a rough split of which signals were going to end up on each of the connectors. The exact order of the signals wasn’t yet defined.

Trial PCB layouts

We experimented with trial PCB layouts for the Compute Module 4 and the CM4 IO Board to see how easy it would be to route the signals; even at this stage, the final size of the CM4 hadn’t been fixed. Over time, and after juggling parts around the PCB, I came to a sensible compromise. There were lots of things to consider, including the fact that the taller components had to go on the top side of the PCB.

The pinout was constantly being adjusted to an ordering that was a good compromise for both the CM4 and the IO Board. The IO Board layout was a really important consideration: after we made the first prototype boards, we decided to change the pinout slightly to make PCB layout on the IO Board even easier for the end user.

When the prototype Compute Module 4 IO Boards arrived back from manufacture, the connectors hadn’t arrived in time to be assembled by machine, so I fitted them by hand in the lab. Pro tip: if you have to fit connectors by hand, take your time to ensure they are lined up correctly, and use lots of flux to help the solder flow into the joints. Sometimes people use very small soldering iron tips thinking it will help; in fact, one of the goals of soldering is to get heat into the joint, and if the tip is too small it will be difficult to heat the solder joint sufficiently to make a good connection.

Compute Module 4 IO Board

New features

Whilst it was easy to add some headline features like a second HDMI port, other useful features don’t grab as much attention. One example is that we have simplified the powering requirements. Previous Compute Modules required multiple PSUs to power a board, and the power-up sequence had to be exactly correct. Compute Module 4 simply requires a single +5V PSU.

In fact, the simplest possible base board for Compute Module 4 requires just a +5V supply and one of the connectors, and nothing else. You would need a CM4 variant with eMMC and wireless connectivity: you can boot the module from the eMMC, wireless connectivity gives you networking, and Bluetooth connectivity gives you access to IO devices. If you do add extra IO devices, the CM4 can also provide a +3.3V supply to power them, avoiding the need for an external power supply.

We have seen some customers experience issues with adding wireless interfaces to previous Compute Modules, so a really important requirement was to provide the option of wireless support. We wanted to be as flexible as possible, so we have added support for an external antenna. Because radio certification can be a very hard and expensive process, we have a pre-certified external antenna kit that can be supplied with Compute Module 4. This should greatly simplify product certification for end products, although engineering designers should check to make certain of meeting all local requirements.

Antenna Kit and Compute Module 4

PCIe

This is probably the most exciting new interface to come to Compute Module 4. On the existing Raspberry Pi 4, this interface is used internally to add the XHCI controller which provides the USB 3 ports. By providing the PCIe externally, we are giving end users the choice of how they would like to use this interface. Many applications don’t need USB 3 performance, so the end user can make use of it in other ways — for NVMe drives, to take one example.

Ethernet

In order to have wired Ethernet connectivity with previous Compute Modules, you needed to add an external USB-to-Ethernet interface. This adds complexity to the IO board, and one of the aims of the new Compute Module 4 is to make interfacing to it simple. With this in mind, we added a physical Ethernet interface to CM4, and we also took the opportunity to add support for IEEE1588 to this. As a result, adding Gigabit wired networking to CM4 requires only the addition of a magjack; no extra silicon is needed. Because this is a true Gigabit interface, it is also faster than the USB-to-Ethernet interfaces that previous Compute Modules use.

Raspberry Pi Compute Module 4

Open-sourcing the Compute Module 4 IO Board design files

Early on in the process, we decided that we were going to open-source the design files for the Compute Module 4 IO Board. We used our big expensive CAD system for Compute Module 4 itself, and while we could have decided to do the design for the IO Board in the big CAD system too and then port it across to KiCAD, it’s easy to introduce issues in the porting process.

So, instead, we used KiCAD for the IO Board from the start, and the design files that come out of KiCAD are the same ones that we use in manufacture. During development I had both CAD systems running at the same time on the computer.

Easier integration and enhanced possibilities

We have made some big changes to our new Compute Module 4 range, and these should make integration much simpler for our customers. Many interfaces now just need a connector and power, and the new form factor should enable people to design more compact and more powerful products. I look forward to seeing what our customers create over the next few years with Compute Module 4.

High-density connector on board underside

Get your Compute Module 4

The new Raspberry Pi Compute Module 4 is available from our network of Approved Resellers. Head over to the Compute Module 4 product page and select your preferred variant to find your nearest reseller.

Can’t find a reseller near you? No worries. Many of our Approved Resellers ship internationally, so try a few other locations.

The post Designing the Raspberry Pi Compute Module 4 appeared first on Raspberry Pi.

Vulkan update: merged to Mesa

Par : Eben Upton

Today we have another guest post from Igalia’s Iago Toral, who has spent the past year working on the Mesa graphic driver stack for Raspberry Pi 4.

Four months ago we announced that work on the Vulkan effort for Raspberry Pi 4 (v3dv) was progressing well, and that we were moving the development to an open repository.

vkQuake3 on Raspberry Pi 4

This week, the Vulkan driver for Raspberry Pi 4 has been merged with Mesa upstream, becoming one of the official Vulkan Mesa drivers. This brings several advantages:

  • Easier to find: now anyone willing to test the driver just needs to go to the official Mesa repository.
  • Bug tracking: issues/bugs can now be filed on the official Mesa repository bug tracker. If the problem affects other parts of the project, it will be easier for us to involve other Mesa developers.
  • Releasing: v3dv will be included in all Mesa releases. In due course, you will no longer need to go to an external repository to obtain the driver, as it will be included in the Mesa package for your distribution.
  • Maintenance: v3dv will be included in the Mesa Continuous Integration system, so every merge request will be tested to ensure that our driver still builds. More effort can go to new features and bug fixes rather than just keeping up with upstream changes.

Progress, and current status

We said back in June that we were passing over 70,000 tests from the Khronos Conformance Test Suite for Vulkan 1.0, and that we had an implementation for a significant subset of the Vulkan 1.0 API. Now we are passing over 100,000 tests, and have implemented the full Vulkan 1.0 API. Only a handful of CTS tests remain to be fixed.

Sascha Willems’ deferred multisampling demo

This doesn’t mean that our work is done, of course. Although the CTS is a really complete test suite, it is not the same as a real use case. As mentioned in some of our previous updates, we have been testing the driver with Vulkan ports of the original Quake trilogy, but deeper and more detailed testing is needed. So the next step will be to test the driver with more use cases, and to fix any bugs or performance issues that we find in the process.

The post Vulkan update: merged to Mesa appeared first on Raspberry Pi.

Raspberry Pi Compute Module 4 on sale now from $25

Par : Eben Upton

It’s become a tradition that we follow each Raspberry Pi model with a system-on-module variant based on the same core silicon. Raspberry Pi 1 gave rise to the original Compute Module in 2014; Raspberry Pi 3 and 3+ were followed by Compute Module 3 and 3+ in 2017 and 2019 respectively. Only Raspberry Pi 2, our shortest-lived flagship product at just thirteen months, escaped the Compute Module treatment.

It’s been sixteen months since we unleashed Raspberry Pi 4 on the world, and today we’re announcing the launch of Compute Module 4, starting from $25.

Over half of the seven million Raspberry Pi units we sell each year go into industrial and commercial applications, from digital signage to thin clients to process automation. Many of these applications use the familiar single-board Raspberry Pi, but for users who want a more compact or custom form factor, or on-board eMMC storage, Compute Module products provide a simple way to move from a Raspberry Pi-based prototype to volume production.

A step change in performance

Built on the same 64-bit quad-core BCM2711 application processor as Raspberry Pi 4, our Compute Module 4 delivers a step change in performance over its predecessors: faster CPU cores, better multimedia, more interfacing capabilities, and, for the first time, a choice of RAM densities and a wireless connectivity option.

Raspberry Pi Compute Module 4

You can find detailed specs here, but let’s run through the highlights:

  • 1.5GHz quad-core 64-bit ARM Cortex-A72 CPU
  • VideoCore VI graphics, supporting OpenGL ES 3.x
  • 4Kp60 hardware decode of H.265 (HEVC) video
  • 1080p60 hardware decode, and 1080p30 hardware encode of H.264 (AVC) video
  • Dual HDMI interfaces, at resolutions up to 4K
  • Single-lane PCI Express 2.0 interface
  • Dual MIPI DSI display, and dual MIPI CSI-2 camera interfaces
  • 1GB, 2GB, 4GB or 8GB LPDDR4-3200 SDRAM
  • Optional 8GB, 16GB or 32GB eMMC Flash storage
  • Optional 2.4GHz and 5GHz IEEE 802.11b/g/n/ac wireless LAN and Bluetooth 5.0
  • Gigabit Ethernet PHY with IEEE 1588 support
  • 28 GPIO pins, with up to 6 × UART, 6 × I2C and 5 × SPI
Compute Module 4 Lite (without eMMC Flash memory)
Compute Module 4 Lite, our variant without eMMC Flash memory

New, more compact form factor

Compute Module 4 introduces a brand new form factor, and a compatibility break with earlier Compute Modules. Where previous modules adopted the JEDEC DDR2 SODIMM mechanical standard, with I/O signals on an edge connector, we now bring I/O signals to two high-density perpendicular connectors (one for power and low-speed interfaces, and one for high-speed interfaces).

This significantly reduces the overall footprint of the module on its carrier board, letting you achieve smaller form factors for your products.

High-density connector on board underside

32 variants

With four RAM options, four Flash options, and optional wireless connectivity, we have a total of 32 variants, with prices ranging from $25 (for the 1GB RAM, Lite, no wireless variant) to $90 (for the 8GB RAM, 32GB Flash, wireless variant).

We’re very pleased that the four variants with 1GB RAM and no wireless keep the same price points ($25, $30, $35, and $40) as their Compute Module 3+ equivalents: once again, we’ve managed to pack a lot more performance into the platform without increasing the price.

You can find the full price list in the Compute Module 4 product brief.

Compute Module 4 IO Board

To help you get started with Compute Module 4, we are also launching an updated IO Board. Like the IO boards for earlier Compute Module products, this breaks out all the interfaces from the Compute Module to standard connectors, providing a ready-made development platform and a starting point for your own designs.

Compute Module 4 IO Board

The IO board provides:

  • Two full-size HDMI ports
  • Gigabit Ethernet jack
  • Two USB 2.0 ports
  • MicroSD card socket (only for use with Lite, no-eMMC Compute Module 4 variants)
  • PCI Express Gen 2 x1 socket
  • HAT footprint with 40-pin GPIO connector and PoE header
  • 12V input via barrel jack (supports up to 26V if PCIe unused)
  • Camera and display FPC connectors
  • Real-time clock with battery backup

CAD for the IO board is available in KiCad format. You may recall that a few years ago we made a donation to support improvements to KiCad’s differential pair routing and track length control features; now you can use this feature-rich, open-source PCB layout package to design your own Compute Module carrier board.

Compute Module 4 mounted on the IO Board

In addition to serving as a development platform and reference design, we expect the IO board to be a finished product in its own right: if you require a Raspberry Pi that supports a wider range of input voltages, has all its major connectors in a single plane, or allows you to attach your own PCI Express devices, then Compute Module 4 with the IO Board does what you need.

We’ve set the price of the bare IO board at just $35, so a complete package including a Compute Module starts from $60.

Compute Module 4 Antenna Kit

We expect that most users of wireless Compute Module variants will be happy with the on-board PCB antenna. However, in some circumstances – for example, where the product is in a metal case, or where it is not possible to provide the necessary ground plane cut-out under the module – an external antenna will be required. The Compute Module 4 Antenna Kit comprises a whip antenna, with a bulkhead screw fixture and U.FL connector to attach to the socket on the module.

Antenna Kit and Compute Module 4

When using either the Antenna Kit or the on-board antenna, you can take advantage of our modular certification to reduce the conformance testing costs for your finished product. And remember, the Raspberry Pi Integrator Programme is there to help you get your Compute Module-based product to market.

Our most powerful Compute Module

This is our best Compute Module yet. It’s also our first product designed by Dominic Plunkett, who joined us almost exactly a year ago.

I sat down with Dominic last week to discuss Compute Module 4 in greater detail, and you can find the video of our conversation here. Dominic will also be sharing more technical detail in the blog tomorrow.

In the meantime, check out the Compute Module 4 page for the datasheet and other details, and start thinking about what you’ll build with Compute Module 4.

The post Raspberry Pi Compute Module 4 on sale now from $25 appeared first on Raspberry Pi.

Monitor your GitHub build with a Raspberry Pi pumpkin

GitHub’s Martin Woodward has created a spooky pumpkin that warns you about the thing programmers find scariest of all — broken builds. Here’s his guest post describing the project:

“When you are browsing code looking for open source projects, seeing a nice green passing build badge in the ReadMe file lets you know everything is working with the latest version of that project. As a programmer you really don’t want to accidentally commit bad code, which is why we often set up continuous integration builds that constantly check the latest code in our project.”

“I decided to create a 3D-printed pumpkin that would hold a Raspberry Pi Zero with an RGB LED pHat on top to show me the status of my build for Halloween. All the code is available on GitHub alongside the 3D printing models which are also available on Thingiverse.”

Components

  • Raspberry Pi Zero (I went for the WH version to save me soldering on the header pins)
  • Unicorn pHat from Pimoroni
  • Panel mount micro-USB extension
  • M2.5 hardware for mounting (screws, male PCB standoffs, and threaded inserts)

“For the 3D prints, I used a glow-in-the-dark PLA filament for the main body and Pi holder, along with a dark green PLA filament for the top plug.”

“I’ve been using M2.5 threaded inserts quite a bit when printing parts to fit a Raspberry Pi, as it allows you to simply design a small hole in your model and then you push the brass thread into the gap with your soldering iron to melt it securely into place ready for screwing in your device.”

Threaded insert

“Once the inserts are in, you can screw the Raspberry Pi Zero into place using some brass PCB stand-offs, place the Unicorn pHAT onto the GPIO ports, and then screw that down.”

pHAT install

“Then you screw in the panel-mounted USB extension into the back of the pumpkin, connect it to the Raspberry Pi, and snap the Raspberry Pi holder into place in the bottom of your pumpkin.”

Inserting the base

Code along with Martin

“Now you are ready to install the software. You can get the latest version from my PumpkinPi project on GitHub.”

“Format the micro SD Card and install Raspberry Pi OS Lite. Rather than plugging in a keyboard and monitor, you probably want to do a headless install, configuring SSH and WiFi by dropping an ssh file and a wpa_supplicant.conf file onto the root of the SD card after copying over the Raspbian files.”

“You’ll need to install the Unicorn HAT software, but they have a cool one-line installer that takes care of all the dependencies including Python and Git.”

\curl -sS https://get.pimoroni.com/unicornhat | bash

“In addition, we’ll be using the requests module in Python which you can install with the following command:”

sudo pip install requests

“Next you want to clone the git repo.”

git clone https://github.com/martinwoodward/PumpkinPi.git

“You then need to modify the settings to point at your build badge. First of all copy the sample settings provided in the repo:”

cp ~/PumpkinPi/src/local_settings.sample ~/PumpkinPi/src/local_settings.py

“Then edit the BADGE_LINK variable and point at the URL of your build badge.”

# Build Badge for the build you want to monitor
BADGE_LINK = "https://github.com/martinwoodward/calculator/workflows/CI/badge.svg?branch=main"

# How often to check (in seconds). Remember - be nice to the server. Once every 5 minutes is plenty.
REFRESH_INTERVAL = 300

“Finally you can run the script as root:”

sudo python ~/PumpkinPi/src/pumpkinpi.py &

“Once you are happy everything is running how you want, don’t forget you can run the script at boot time. The easiest way to do this is to use crontab. See this cool video from Estefannie to learn more. But basically you do sudo crontab -e then add the following:”

@reboot /bin/sleep 10 ; /usr/bin/python /home/pi/PumpkinPi/src/pumpkinpi.py &

“Note that we are pausing for 10 seconds before running the Python script. This is to allow the WiFi network to connect before we check on the state of our build.”

“The current version of the pumpkinpi script works with all the SVG files produced by the major hosted build providers, including GitHub Actions, which is free for open source projects. But if you want to improve the code in any way, I’m definitely accepting pull requests on it.”
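The badges these providers produce are plain SVG images with the build status embedded as text, so the heart of such a script can be a simple substring check. Here is an illustrative Python sketch of that idea (this is not the actual PumpkinPi code; the `badge_status` helper is invented for the example, and in the real script the SVG text would come from `requests.get(BADGE_LINK).text`):

```python
def badge_status(svg_text):
    """Return 'passing', 'failing', or 'unknown' for a build badge's SVG.

    Hosted CI providers embed the build status as plain text inside the
    SVG markup, so a case-insensitive substring check covers the common
    cases without needing a full XML parser.
    """
    text = svg_text.lower()
    if "passing" in text or "success" in text:
        return "passing"
    if "failing" in text or "failed" in text or "error" in text:
        return "failing"
    return "unknown"
```

A polling loop would then fetch the badge every `REFRESH_INTERVAL` seconds and set the LEDs green, red, or another colour accordingly.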

“Using the same hardware you could monitor lots of different things, such as when someone posts on Twitter, what the weather will be tomorrow, or maybe just code your own unique multi-coloured display that you can leave flickering in your window.”

“If you build this project or create your own pumpkin display, I’d love to see pictures. You can find me on Twitter @martinwoodward and on GitHub.”

The post Monitor your GitHub build with a Raspberry Pi pumpkin appeared first on Raspberry Pi.

Join the UK Bebras Challenge 2020 for schools!

Par : Dan Fisher

The annual UK Bebras Computational Thinking Challenge for schools, brought to you by the Raspberry Pi Foundation and Oxford University, is taking place this November!

UK Bebras Challenge logo

The Bebras Challenge is a great way for your students to practise their computational thinking skills while solving exciting, accessible, and puzzling questions. Usually this 40-minute challenge would take place in the classroom. However, this year for the first time, your students can participate from home too!

If your students haven’t entered before, now is a great opportunity for them to get involved: they don’t need any prior knowledge. 

Do you have any students who are up for tackling the Bebras Challenge? Then register your school today!

School pupils in a computing classroom

What you need to know about the Bebras Challenge

  • It’s a great whole-school activity open to students aged 6 to 18, in different age group categories.
  • It’s completely free!
  • The closing date for registering your school is 30 October.
  • Let your students complete the challenge between 2 and 13 November 2020.
  • The challenge is made up of a set of short tasks, and completing it takes 40 minutes.
  • The challenge tasks focus on logical thinking and do not require any prior knowledge of computer science.
  • There are practice questions to help your students prepare for the challenge.
  • This year, students can take part at home (please note they must still be entered through their school).
  • All the marking is done for you! The results will be sent to you the week after the challenge ends, along with the answers, so that you can go through them with your students.

“Thank you for another super challenge. It’s one of the highlights of my year as a teacher. Really, really appreciate the high-quality materials, website, challenge, and communication. Thank you again!”

– A UK-based teacher

Support your students to develop their computational thinking skills with Bebras materials

Bebras is an international challenge that started in Lithuania in 2004 and has grown into an international event. The UK became involved in Bebras for the first time in 2013, and the number of participating students has increased from 21,000 in the first year to more than 260,000 last year! Internationally, nearly 3 million learners took part in 2019. 

Bebras is a great way to engage your students of all ages in problem-solving and give them a taste of what computing is all about. In the challenge results, computing principles are highlighted, so Bebras can be educational for you as a teacher too.

The annual Bebras Challenge is only one part of the equation: questions from previous years are available as a resource that you can use to create self-marking quizzes for your classes. You can use these materials throughout the year to help you to deliver the computational thinking part of your curriculum!

The post Join the UK Bebras Challenge 2020 for schools! appeared first on Raspberry Pi.

Raspberry Pi High Quality security camera

DJ from the element14 community shows you how to build a red-lensed security camera in the style of Portal 2 using the Raspberry Pi High Quality Camera.

The finished camera mounted on the wall

Portal 2 is a puzzle platform game developed by Valve — a “puzzle game masquerading as a first-person shooter”, according to Forbes.

DJ playing with the Raspberry Pi High Quality Camera

Kit list

No code needed!

DJ was pleased to learn that you don’t need to write any code to make your own security camera: you can just use a package called motionEyeOS. All you have to do is download the motionEyeOS image, pop the flashed SD card into your Raspberry Pi, and you’re pretty much good to go.

DJ got everything set up on a 5″ screen attached to the Raspberry Pi

You’ll find that the default resolution is 640×480, so it will show up as a tiny window on your monitor of choice, but that can be amended.

Simplicity

While this build is very simple electronically, the 20-part 3D-printed shell is beautiful. A Raspberry Pi is positioned on a purpose-built platform in the middle of the shell, connected to the Raspberry Pi High Quality Camera, which sits at the front of that shell, peeking out.

All the 3D printed parts ready to assemble

The 5V power supply is routed through the main shell into the base, which mounts the build to the wall. In order to keep the Raspberry Pi cool, DJ made some vent holes in the lens of the shell. The red LED is routed out of the side and sits on the outside body of the shell.

Magnetising

Raspberry Pi 4 (centre) and Raspberry Pi High Quality Camera (right) sat inside the 3D printed shell

This build is also screwless: the halves of the shell have what look like screw holes along the edges, but they are actually 3mm neodymium magnets, so assembly and repair is super easy as everything just pops on and off.

The final picture (that’s DJ!)

You can find all the files you need to recreate this build, or you can ask DJ a question, at element14.com/presents.

The post Raspberry Pi High Quality security camera appeared first on Raspberry Pi.

AI-Man: a handy guide to video game artificial intelligence

Discover how non-player characters make decisions by tinkering with this Unity-based Pac-Man homage. Paul Roberts wrote this for the latest issue of Wireframe magazine.

From the first video game to the present, artificial intelligence has been a vital part of the medium. While most early games had enemies that simply walked left and right, like the Goombas in Super Mario Bros., there were also games like Pac-Man, where each ghost appeared to move intelligently. But from a programming perspective, how do we handle all the different possible states we want our characters to display?

Here’s AI-Man, our homage to a certain Namco maze game. You can switch between AI types to see how they affect the ghosts’ behaviours.

For example, how do we control whether a ghost is chasing Pac-Man, or running away, or even returning to its home? To explore these behaviours, we’ll be tinkering with AI-Man – a Pac-Man-style game developed in Unity. It will show you how the approaches discussed in this article are implemented, and there’s code available for you to modify and add to. You can freely download the AI-Man project here.

One solution to managing the different states a character can be in, which has been used for decades, is a finite state machine, or FSM for short. It’s an approach that describes the high-level actions of an agent, and takes its name simply from the fact that there is a finite number of states to transition between, with each state only ever doing one thing.


Altered states

To explain what’s meant by high level, let’s take a closer look at the ghosts in Pac-Man. The high-level state of a ghost is to ‘Chase’ Pac-Man, but the low level is how the ghost actually does this. In Pac-Man, each ghost has its own behaviour in which it hunts the player down, but they’re all in the same high-level state of ‘Chase’. Looking at Figure 1, you can see how the overall behaviour of a ghost can be depicted extremely easily, but there’s a lot of hidden complexity. At what point do we transition between states? What are the conditions on moving between states across the connecting lines? Once we have this information, the diagram can be turned into code with relative ease. You could use simple switch statements to achieve this, or we could achieve the same using an object-oriented approach.

Figure 1: A finite state machine

Using switch statements can quickly become cumbersome the more states we add, so I’ve used the object-oriented approach in the accompanying project, and an example code snippet can be seen in Code Listing 1. Each state handles whether it needs to transition into another state, and lets the state machine know. If a transition’s required, the Exit() function is called on the current state, before calling the Enter() function on the new state. This is done to ensure any setup or cleanup is done, after which the Update() function is called on whatever the current state is. The Update() function is where the low-level code for completing the state is processed. For a project as simple as Pac-Man, this only involves setting a different position for the ghost to move to.
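The AI-Man project itself is written in C# for Unity, but the Enter/Exit/Update pattern described above can be sketched in a few lines of Python (an illustrative translation with invented ghost behaviour, not the project’s actual code):

```python
class Ghost:
    """Minimal stand-in for the game object the states drive."""
    def __init__(self):
        self.player_has_power_pellet = False
        self.target = None

    def move_towards(self, target):
        # A real game would steer the ghost; here we just record the goal.
        self.target = target


class State:
    """Base class: each state does its own setup, cleanup, and update."""
    def enter(self, ghost): pass
    def exit(self, ghost): pass
    def update(self, ghost):
        """Do the low-level work; return the next State, or None to stay."""
        return None


class ChaseState(State):
    def update(self, ghost):
        ghost.move_towards("player")
        if ghost.player_has_power_pellet:
            return EvadeState()  # the state itself requests the transition
        return None


class EvadeState(State):
    def update(self, ghost):
        ghost.move_towards("corner")
        if not ghost.player_has_power_pellet:
            return ChaseState()
        return None


class StateMachine:
    def __init__(self, initial_state, ghost):
        self.ghost = ghost
        self.current = initial_state
        self.current.enter(ghost)

    def update(self):
        next_state = self.current.update(self.ghost)
        if next_state is not None:
            self.current.exit(self.ghost)   # cleanup on the way out
            self.current = next_state
            self.current.enter(self.ghost)  # setup on the way in
```

Each frame you call the machine’s update(); because the states themselves decide when to hand over, adding a new behaviour means adding a new State subclass rather than growing a switch statement.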


Hidden complexity

Extending this approach, it’s reasonable for a state to call multiple states from within. This is called a hierarchical finite state machine, or HFSM for short. An example is an agent in Call of Duty: Strike Team being instructed to seek a stealthy position, so the high-level state is ‘Find Cover’, but within that, the agent needs to exit the dumpster he’s currently hiding in, find a safe location, calculate a safe path to that location, then repeatedly move between points on that path until he reaches the target position.

FSMs can appear somewhat predictable, as the agent will always transition into the same state. This can be mitigated by having multiple options that achieve the same goal. For example, when the ghosts in our Unity project are in the ‘Chase’ state, they can either move to the player, get in front of the player, or move to a position behind the player. There’s also an option to move to a random position. The FSM implemented has each ghost do one of these, whereas the behaviour tree allows all ghosts to switch between the options every ten seconds. A limitation of the FSM approach is that you can only ever be in a single state at a particular time. Imagine a tank battle game where multiple enemies can be engaged. Simply being in the ‘Retreat’ state doesn’t look smart if you’re about to run into the sights of another enemy. The worst-case scenario would be our tank transitioning between ‘Attack’ and ‘Retreat’ states on each frame – an issue known as state thrashing – and getting stuck, seemingly confused about what to do. What we need is a way to be in multiple states at the same time: ideally retreating from tank A, whilst attacking tank B. This is where fuzzy finite state machines, or FFSMs for short, come in useful.

This approach allows you to be in a particular state to a certain degree. For example, my tank could be 80% committed to the ‘Retreat’ state (avoiding tank A) and 20% committed to the ‘Attack’ state (attacking tank B). This allows us to both retreat and attack at the same time. To achieve this, on each update your agent needs to check every possible state to determine its degree of commitment, then call each of the active states’ updates. This differs from a standard FSM, where you can only ever be in a single state: FFSMs can be in none, one, two, or however many states you like at one time. This can prove tricky to balance, but it does offer an alternative to the standard approach.
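As a sketch of the idea (illustrative Python, not the article’s project code): each state reports a degree of commitment between 0 and 1, and every state with a non-zero degree gets an update, weighted accordingly.

```python
def clamp01(x):
    return max(0.0, min(1.0, x))

class FuzzyState:
    def degree(self, tank): return 0.0        # commitment to this state, 0..1
    def update(self, tank, weight): pass      # act, scaled by that commitment

class Retreat(FuzzyState):
    def degree(self, tank):
        return clamp01(1.0 - tank.health)     # the weaker we are, the more we retreat
    def update(self, tank, weight):
        tank.steer_away = weight              # e.g. 80% committed to avoiding tank A

class Attack(FuzzyState):
    def degree(self, tank):
        return clamp01(tank.health)
    def update(self, tank, weight):
        tank.fire_rate = weight               # e.g. 20% committed to attacking tank B

class FuzzyStateMachine:
    def __init__(self, states):
        self.states = states
    def update(self, tank):
        for s in self.states:                 # check EVERY state on each update...
            w = s.degree(tank)
            if w > 0.0:                       # ...and call all the active ones
                s.update(tank, w)

class Tank:
    def __init__(self, health):
        self.health = health
        self.steer_away = self.fire_rate = 0.0
```

A badly damaged tank ends up mostly retreating while still firing a little – both states run on the same frame, which a plain FSM can’t do.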


No memory

Another potential issue with an FSM is that the agent has no memory of what it was previously doing. Granted, this may not be important: in the example given, the ghosts in Pac-Man don’t care about what they were doing, only about what they are doing, but in other games memory can be extremely important. Imagine instructing a character to gather wood in a game like Age of Empires, and then the character gets into a fight. It would be extremely frustrating if the characters just stood around with nothing to do once the fight had concluded, and the player had to go back and reinstruct each of them. It would be much better for the characters to return to their previous duties.

“FFSMs can be in one, none, two, or however many states you like.”

We can incorporate the idea of memory quite easily by using the stack data structure. The stack holds AI states, with only the top-most element receiving the update. This means that when a state is completed, it’s removed from the stack and the previous state is then processed. Figure 2 depicts how this was achieved in our Unity project. To differentiate the states from the FSM approach, I’ve called them tasks for the stack-based implementation. Reading Figure 2 from the bottom, the ghost was chasing the player, then the player collected a power pill, which resulted in the AI adding an Evade_Task – this now gets the update call, not the Chase_Task. While evading the player, the ghost was then eaten.

At this point, the ghost needed to return home, so the appropriate task was added. Once home, the ghost needed to exit this area, so again, the relevant task was added. At the point the ghost exited home, the ExitHome_Task was removed, which drops processing back to MoveToHome_Task. This was no longer required, so it was also removed. Back in the Evade_Task, if the power pill was still active, the ghost would return to avoiding the player, but if it had worn off, this task, in turn, got removed, putting the ghost back in its default task of Chase_Task, which will get the update calls until something else in the world changes.
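A simplified Python sketch of the stack idea (omitting ExitHome_Task, and with made-up ghost fields – the task names follow Figure 2, but the bodies here are illustrative): only the top of the stack receives the update; tasks push new tasks when the world changes and pop themselves when complete.

```python
class Chase_Task:
    """Default task: sits at the bottom of the stack."""
    def update(self, ghost, stack):
        if ghost.power_pill_active:
            stack.append(Evade_Task())       # power pill collected: Evade now gets updates
        else:
            ghost.target = 'player'

class Evade_Task:
    def update(self, ghost, stack):
        if ghost.eaten:
            stack.append(MoveToHome_Task())  # eaten while evading: head home first
        elif not ghost.power_pill_active:
            stack.pop()                      # pill wore off: fall back to Chase_Task
        else:
            ghost.target = 'away-from-player'

class MoveToHome_Task:
    def update(self, ghost, stack):
        if ghost.at_home:
            ghost.eaten = False
            stack.pop()                      # task complete: drop back to Evade_Task
        else:
            ghost.target = 'home'

def tick(ghost, stack):
    stack[-1].update(ghost, stack)           # only the top-most task gets the update
```

Because completed tasks simply pop themselves off, the ghost automatically resumes whatever it was doing before – the “memory” comes for free from the stack.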

Figure 2: Stack-based finite state machine.


Behaviour trees

While developing Halo 2, programmer Damian Isla expanded on the idea of the HFSM in a way that made it more scalable and modular for the game’s AI. This became known as the behaviour tree approach, and it’s now a staple in game AI development. The behaviour tree is made up of nodes, which can be one of three types – composite, decorator, or leaf nodes. Each has a different function within the tree and affects the flow through it. Figure 3 shows how this approach is set up for our Unity project. The states we’ve explored so far are called leaf nodes. Leaf nodes end a particular branch of the tree and don’t have child nodes – these are where the AI behaviours are located. For example, Leaf_ExitHome, Leaf_Evade, and Leaf_MoveAheadOfPlayer all tell the ghost where to move to. Composite nodes can have multiple child nodes and are used to determine the order in which the children are called. This could be the order in which they’re described by the tree, or by selection, where the child nodes compete, with the parent node selecting which child gets the go-ahead. Selector_Chase allows the ghost to select a single path down the tree by choosing a random option, whereas Sequence_GoHome has to complete all the child paths to complete its behaviour.

Code Listing 2 shows how simple it is to choose a random behaviour to use – just be sure to store the index for the next update. Code Listing 3 demonstrates how to go through all child nodes, and to return SUCCESS only when all have completed, otherwise the status RUNNING is returned. FAILURE only gets returned when a child node itself returns a FAILURE status.
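In Python rather than the project’s C#, the two composite types might look like this (a sketch; the string statuses stand in for the project’s SUCCESS/RUNNING/FAILURE enum, and the random-selector behaviour mirrors Code Listing 2, the in-order sequence Code Listing 3):

```python
import random

SUCCESS, FAILURE, RUNNING = 'SUCCESS', 'FAILURE', 'RUNNING'

class Leaf:
    """A leaf node wraps one behaviour; action() returns a status."""
    def __init__(self, action):
        self.action = action
    def tick(self):
        return self.action()

class Selector:
    """Picks one child at random and sticks with it until it finishes."""
    def __init__(self, children):
        self.children, self.index = children, None
    def tick(self):
        if self.index is None:
            self.index = random.randrange(len(self.children))  # store for next update
        status = self.children[self.index].tick()
        if status != RUNNING:
            self.index = None              # finished: choose afresh next time
        return status

class Sequence:
    """Runs its children in order; SUCCESS only when all have completed."""
    def __init__(self, children):
        self.children, self.index = children, 0
    def tick(self):
        status = self.children[self.index].tick()
        if status == FAILURE:
            self.index = 0
            return FAILURE                 # any failing child fails the sequence
        if status == SUCCESS:
            self.index += 1
            if self.index == len(self.children):
                self.index = 0
                return SUCCESS             # all children completed
        return RUNNING
```

A sequence of two instantly successful children reports RUNNING after the first tick and SUCCESS after the second, exactly the pattern described above.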


Complex behaviours

Although not used in our example project, behaviour trees can also have nodes called decorators. A decorator node can only have a single child, and can modify the result returned. For example, a decorator may iterate the child node for a set period, perhaps indefinitely, or even flip the result returned from being a success to a failure. From what first appears to be a collection of simple concepts, complex behaviours can then develop.
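Decorators aren’t in the example project, but a Python sketch shows how little they need (the SUCCESS/FAILURE/RUNNING statuses and Leaf class are the same illustrative stand-ins used for the composites; the two decorator names are my own):

```python
SUCCESS, FAILURE, RUNNING = 'SUCCESS', 'FAILURE', 'RUNNING'

class Leaf:
    def __init__(self, action):
        self.action = action
    def tick(self):
        return self.action()

class Inverter:
    """A decorator has exactly one child; this one flips its child's result."""
    def __init__(self, child):
        self.child = child
    def tick(self):
        status = self.child.tick()
        if status == SUCCESS:
            return FAILURE
        if status == FAILURE:
            return SUCCESS
        return RUNNING                     # RUNNING passes through unchanged

class Repeater:
    """A decorator that re-runs its child a set number of times."""
    def __init__(self, child, times):
        self.child, self.times, self.done = child, times, 0
    def tick(self):
        if self.child.tick() != RUNNING:
            self.done += 1                 # the child finished one iteration
        if self.done >= self.times:
            self.done = 0
            return SUCCESS
        return RUNNING
```

Wrapping a node in an Inverter or a Repeater changes what the rest of the tree sees without touching the node itself – which is exactly what makes behaviour trees so modular.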

Figure 3: Behaviour tree

Video game AI is all about the illusion of intelligence. As long as the characters are believable in their context, the player should maintain their immersion in the game world and enjoy the experience we’ve made. Hopefully, the approaches introduced here highlight how even simple approaches can be used to develop complex characters. This is just the tip of the iceberg: AI development is a complex subject, but it’s also fun and rewarding to explore.

Wireframe #43, with the gorgeous Sea of Stars on the cover.

The latest issue of Wireframe Magazine is out now, available in print from the Raspberry Pi Press online store, your local newsagents, and the Raspberry Pi Store, Cambridge.

You can also download the PDF directly from the Wireframe Magazine website.

The post AI-Man: a handy guide to video game artificial intelligence appeared first on Raspberry Pi.

Congratulations Carrie Anne Philbin, MBE

We are delighted to share the news that Carrie Anne Philbin, Raspberry Pi’s Director of Educator Support, has been awarded an MBE for her services to education in the Queen’s Birthday Honours 2020.

Carrie Anne Philbin MBE
Carrie Anne Philbin, newly minted MBE

Carrie Anne was one of the first employees of the Raspberry Pi Foundation and has helped shape our educational programmes over the past six years. Before joining the Foundation, Carrie Anne was a computing teacher, YouTuber, and author.

She’s also a tireless champion for diversity and inclusion in computing; she co-founded a grassroots movement of computing teachers dedicated to diversity and inclusion, and she has mentored young girls and students from disadvantaged backgrounds. She is a fantastic role model and source of inspiration to her colleagues, educators, and young people. 

From history student to computing teacher and YouTuber

As a young girl, Carrie Anne enjoyed arts and crafts and when her dad bought the family a Commodore 64, she loved the graphics she could make on it. She says, “I vividly remember typing in the BASIC commands to create a train that moved on the screen with my dad.” Being able to express her creativity through digital patterns sparked her interest in technology.

After studying history at university, Carrie Anne followed her passion for technology and became an ICT technician at a secondary school, where she also ran several extra-curricular computing clubs for the students. Her school encouraged and supported her to apply for the Graduate Teacher Programme, and she qualified within two years.

Carrie Anne admits that her first experience in a new school as a newly qualified teacher was “pretty terrifying”, and she says her passion for the subject and her sense of humour are what got her through. The students she taught in her classroom still inspire her today.

Showing that computing is for everyone

As well as co-founding CAS #include, a diversity working group for computing teachers, Carrie Anne started the successful YouTube channel Geek Gurl Diaries. Through video interviews with women working in tech and hands-on computer science tutorials, Carrie Anne demonstrates that computing is fun and that it’s great to be a girl who likes computers.

Carrie Anne Philbin MBE sitting at a disk with physical computing equipment

On the back of her own YouTube channel’s success, Carrie Anne was invited to host the Computer Science video series on Crash Course, the extremely popular educational YouTube channel created by Hank and John Green. There, her 40+ videos have received over 2 million views so far.

Discovering the Raspberry Pi Foundation

Carrie Anne says that the Raspberry Pi computer brought her to the Raspberry Pi Foundation, and that she stayed “because of the community and the Foundation’s mission”. She came across the Raspberry Pi while searching for new ways to engage her students in computing, and joined a long waiting list to get her hands on the single-board computer. After her Raspberry Pi finally arrived, she carried it in her handbag to community meetups to learn how other people were using it in education.

Carrie Anne Philbin
Carrie Anne with her book Adventures in Raspberry Pi

Since joining the Foundation, Carrie Anne has helped to build an incredible team, many of them also former computing teachers. Together they have trained thousands of educators and produced excellent resources that are used by teachers and learners around the world. Most recently, the team created the Teach Computing Curriculum of over 500 hours of free teaching resources for primary and secondary teachers; free online video lessons for students learning at home during the pandemic (in partnership with Oak National Academy); and Isaac Computer Science, a free online learning platform for A level teachers and students.

On what she wants to empower young people to do

Carrie Anne says, “We’re living in an ever-changing world that is facing many challenges right now: climate change, democracy and human rights, oh and a global pandemic. These are issues that young people care about. I’ve witnessed this year after year at our international Coolest Projects technology showcase event for young people, where passionate young creators present the tech solutions they are already building to address today’s and tomorrow’s problems. I believe that equipped with a deeper understanding of technology, young people can change the world for the better, in ways we’ve not even imagined.” 

Carrie Anne has already achieved a huge amount in her career, and we honestly believe that she is only just getting started. On behalf of all your colleagues at the Foundation and all the educators and young people whose lives you’ve changed, congratulations Carrie Anne! 

The post Congratulations Carrie Anne Philbin, MBE appeared first on Raspberry Pi.

Scroll text across your face mask with NeoPixel and Raspberry Pi

Have you perfected your particular combination of ‘eye widening then squinting’ to let people know you’re smiling at them behind your mask? Or do you need help expressing yourself from this text-scrolling creation by Caroline Dunn?

The mask running colourful sample code

What’s it made of?

The main bits of hardware needed are a Raspberry Pi 3, Raspberry Pi 4, or Raspberry Pi Zero W (or a Zero WH with pre-soldered GPIO header if you don’t want to do the soldering yourself), and an 8×8 Flexible NeoPixel Matrix with individually addressable LEDs. The latter is a two-dimensional grid of NeoPixels, all controlled via a single microcontroller pin.

Raspberry Pi and the NeoPixel Matrix (bottom left) getting wired up

The NeoPixel Matrix is attached to a cloth face mask which has a second, translucent fabric layer. The translucent layer is what you sew your Raspberry Pi project to; the cloth layer underneath is a barrier for germs.

You’ll need a separate 5V power source for the NeoPixel Matrix. Caroline used a 5V power bank, which involved some extra fiddling with cutting up and stripping an old USB cable. You may want to go for a purpose-made traditional power supply for ease.

Running the text

To prototype, Caroline connected the Raspberry Pi computer to the NeoPixel Matrix via a breadboard and some jumper wires. At this stage of your own build, you can check everything is working by running this sample code from Adafruit, which should get your NeoPixel Matrix lighting up like a rainbow.

The internal website on the left

Once you’ve got your project up and running, you can ditch the breadboard and wires and set up the key script, app.py, to run on boot.

Going mobile

To change the text scrolling across your mask, you use the internal website that’s part of Caroline’s code.

And for a truly mobile solution, you can access the internal website via mobile phone by hooking up your Raspberry Pi using your phone’s hotspot functionality. Then you can alter the scrolling text while you’re out and about.

Caroline wearing the 32×8 version

Caroline also created a version of her project using a 32×8 NeoPixel Matrix, which fits across the headband of larger plastic face visors.

If you want to make this build for yourself, you’d do well to start with the very nice in-depth walkthrough Caroline created. It’s only three parts; you’ll be fine.

The post Scroll text across your face mask with NeoPixel and Raspberry Pi appeared first on Raspberry Pi.

How teachers train in Computing with our free online courses

Since 2017 we’ve been training Computing educators in England and around the world through our suite of free online courses on FutureLearn. Thanks to support from Google and the National Centre for Computing Education (NCCE), all of these courses are free for anyone to take, whether you are a teacher or not!

An illustration of a bootcamp for computing teachers

We’re excited that Computer Science educators at all stages in their computing journey have embraced our courses — from teachers just moving into the field to experienced educators looking for a refresher so that they can better support their colleagues.

Hear from two teachers about their experience of training with our courses and how they are benefitting!

Moving from Languages to IT to Computing

Rebecca Connell started out as a Modern Foreign Languages teacher, but now she is Head of Computing at The Cowplain School, an 11–16 secondary school in Hampshire.

Computing teacher Rebecca Connell
Computing teacher Rebecca finds our courses “really useful in building confidence and taking [her] skills further”.

Although she had plenty of experience with Microsoft Office and was happy teaching IT, at first she was daunted by the technical nature of Computing:

“The biggest challenge for me has been the move away from an IT to a Computing curriculum. To say this has been a steep learning curve is an understatement!”

However, Rebecca has worked with our courses to improve her coding knowledge, especially in Python:

“Initially, I undertook some one-day programming courses in Python. Recently, I have found the Raspberry Pi courses to be really useful in building confidence and taking my skills further. So far, I have completed Programming 101 — great for revision and teaching ideas — and am now into Programming 102.”

GCSE Computing is more than just programming, and our courses are helping Rebecca develop the rest of her Computing knowledge too:

“I am now taking some online Raspberry Pi courses on computer systems and networks to firm up my knowledge — my greatest fear is saying something that’s not strictly accurate! These courses have some good ideas to help explain complex concepts to students.”

She also highly rates the new free Teach Computing Curriculum resources we have developed for the NCCE:

“I really like the new resources and supporting materials from Raspberry Pi — these have really helped me to look again at our curriculum. They are easy to follow and include everything you need to take students forward, including lesson plans.”

And Rebecca’s not the only one in her department who is benefitting from our courses and resources:

“Our department is supported by an excellent PE teacher who delivers lessons in Years 7, 8, and 9. She has enjoyed completing some of the Raspberry Pi courses to help her to deliver the new curriculum and is also enjoying her learning journey.”

Refreshing and sharing your knowledge

Julie Price, a CAS Master Teacher and NCCE Computer Science Champion, has been “engaging with the NCCE’s Computer Science Accelerator programme, [to] be in a better position to appreciate and help to resolve any issues raised by fellow participants.”

Computing teacher Julie Price
Computer science teacher Julie Price says she is “becoming addicted” to our online courses!

“I have encountered new learning for myself and also expressions of very familiar content which I have found to be seriously impressive and, in some cases, just amazing. I must say that I am becoming addicted to the Raspberry Pi Foundation’s online courses!”

She’s been appreciating the open nature of the courses, as we make all of the materials free to use under the Open Government Licence:

“Already I have made very good use of a wide range of the videos, animations, images, and ideas from the Foundation’s courses.”

Julie particularly recommends the Programming Pedagogy in Secondary Schools: Inspiring Computing Teaching course, describing it as “a ‘must’ for anyone wishing to strengthen their key stage 3 programming curriculum.”

Join in and train with us

Rebecca and Julie are just two of the more than 140,000 active participants we have had on our online courses so far!

With 29 courses to choose from (and more on the way!), from Introduction to Web Development to Robotics with Raspberry Pi, we have something for everyone — whether you’re a complete beginner or an experienced computer science teacher. All of our courses are free to take, so find one that inspires you, and let us support you on your computing journey, along with Google and the NCCE.

If you’re a teacher in England, you are eligible for free course certification from FutureLearn via the NCCE.

The post How teachers train in Computing with our free online courses appeared first on Raspberry Pi.

Haunted House hacks

Spookify your home in time for Halloween with Rob Zwetsloot and these terror-ific projects!

We picked four of our favourites from a much longer feature in the latest issue of The MagPi magazine, so make sure you check it out if you need more Haunted House hacks in your life.

Raspberry Pi Haunted House

This project is a bit of a mixture of indoors and outdoors, with a doorbell on the house activating a series of spooky effects like a creaking door, ‘malfunctioning’ porch lights, and finally a big old monster mash in the garage.

A Halloween themed doorbell

MagPi magazine talked to its creator Stewart Watkiss about it a few years ago and he revealed how he used a PiFace HAT to interface with home automation techniques to create the scary show, although it can be made much easier these days thanks to Energenie. Our favourite part, though, is still the Home Alone-esque monster party that caps it off.

Check it out for yourself here.

Eye of Sauron

It’s a very nice-looking build as well

The dreaded dark lord Sauron from Lord of the Rings watched over Middle-earth in the form of a giant flaming eye atop his black tower, Barad-dûr. Mike Christian’s version sits on top of a shed in Saratoga, CA.

The eye of sauron on top of a barn lit in red lights
Atop the shed with some extra light effects, it looks very scary

It makes use of the Snake Eyes Bonnet from Adafruit, with some code modifications and projecting onto a bigger eye. Throw in some cool lights and copper wires and you get a nice little effect, much like that from the films.

There are loads more cool photos on Mike’s original project page.

Raspberry Pi-powered Jack-o-Lantern

We love the eyes and scary sounds in this version that seem to follow you around

A classic indoor Halloween decoration (and outdoor, according to American movies) is the humble Jack-o’-lantern. While you could carve your own for this kind of project (and we’ve seen many people do so), this version uses a pre-cut, 3D-printed pumpkin.

3D printed pumpkin glowing orange
The original 3D print lit with a single source is still fairly scary

If you want to put one outside as well, we highly recommend you add some waterproofing or put it under a porch of some kind, especially if you live in the UK.

Here’s a video about the project by the maker.

Scary door

You’re unlikely to trick someone already in your house with a random door that has appeared out of nowhere, but while they’re investigating they’ll get the scare of their life. This door was created as a ‘sequel’ to a Scary Porch, and has a big monitor where a window might be in the door. There’s also an array of air-pistons just behind the door to make it sound like someone is trying to get out.

There are various videos that can play on the door screen, and they’re randomised so any viewers won’t know what to expect. This one also uses relays, so be careful.

This project is the brainchild of the element14 community and you can read more about how it was made here.


The MagPi magazine is out now, available in print from the Raspberry Pi Press online store, your local newsagents, and the Raspberry Pi Store, Cambridge.

You can also download the PDF directly from the MagPi magazine website.

The post Haunted House hacks appeared first on Raspberry Pi.

Build an e-paper to-do list with Raspberry Pi

James Bruton (or @xrobotosuk on Instagram) built an IoT-controlled e-paper message board using Raspberry Pi. Updating it is easy: just edit a Google sheet, and the message board will update with the new data.

Harnessing Google power

This smart message board uses e-paper, which has very low power consumption. Combining this with the Google Docs API (which allows you to write code to read and write to Google Docs) and Raspberry Pi makes it possible to build a message board that polls a Google Sheet and updates whenever there’s new data. This guide helped James write the Google Docs API code.
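The polling pattern itself is simple. Here’s a sketch (the function names are illustrative, not James’s actual code; the real fetch would call the Google Sheets API, for instance via a library such as gspread):

```python
import time

def needs_refresh(last_shown, fetched):
    """E-paper redraws are slow, so only redraw when the sheet data has changed."""
    return fetched != last_shown

def poll_loop(fetch_rows, draw, interval=60, max_polls=None):
    """Poll the sheet every `interval` seconds and redraw the display on change."""
    last, polls = None, 0
    while max_polls is None or polls < max_polls:
        rows = fetch_rows()                # e.g. worksheet.get_all_values() via gspread
        if needs_refresh(last, rows):
            draw(rows)                     # push the new to-do list to the e-paper
            last = rows
        polls += 1
        time.sleep(interval)
```

Comparing against the last-drawn data before redrawing matters here: e-paper refreshes take seconds, so you only want to pay that cost when the sheet has actually changed.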

We’ll do #4 for you, James!

Why e-paper?

James’s original plan was to hook up his Raspberry Pi to a standard monitor and use Google Docs so people could update the display via a mobile app. However, a standard monitor consumes a lot of power because of its backlight, and if you set it to go into sleep mode, people would just walk past and miss updates to the list unless they remembered to wake the device up.

Raspberry Pi wearing its blue e-paper HAT on the left, which connects to the display on the right via a ribbon cable

Enter e-paper (the same stuff used in Kindle devices), which only consumes power when it’s updating. Once you’ve got the info you want on the e-paper, you can even disconnect it entirely from your power source and the screen will still display whatever the last update told it to. James’s top tip for your project: go for the smallest e-paper display possible, as those things are expensive. He went with this one, which comes with a HAT for Raspberry Pi and a ribbon cable to connect the two.

The display disconnected from any power and still clearly readable

The HAT has an adaptor for plugging into the Raspberry Pi GPIO pins, and a breakout header for the SPI pins. James found it’s not as simple as enabling SPI on his Raspberry Pi and the e-paper display springing to life: you need a bit of code to enable the SPI display to act as the main display for the Raspberry Pi. Luckily, the code for this is on the wiki of Waveshare, the producer of the HAT and display James used for this project.

Making it pretty

A 3D-printed case, which looks like a classic photo frame but with a hefty in-built stand to hold it up and provide enough space for the Raspberry Pi to sit on, is home to James’s finished smart to-do list. The e-paper is so light and thin it can just be sticky-taped into the frame.

The roomy frame stand

James’s creation is powered by Raspberry Pi 4, but you don’t need that much power, and he’s convinced you’ll be fine with any Raspberry Pi model that has 40 GPIO pins.

Extra points for this maker, as he’s put all the CAD files and code you’ll need to make your own e-paper message board on GitHub.

If you’re into e-paper stuff but are wedded to your handwritten to-do lists, then why not try building this super slow movie player instead? The blog squad went *nuts* for it when we posted it last month.

The post Build an e-paper to-do list with Raspberry Pi appeared first on Raspberry Pi.

Raspberry Pi robot prompts proper handwashing

Amol Deshmukh from the University of Glasgow got in touch with us about a social robot designed to influence young people’s handwashing behaviour, which the design team piloted in a rural school in Kerala, India.

In the pilot study, the hand-shaped Pepe robot motivated a 40% increase in the quality and levels of handwashing. It was designed by AMMACHI Labs and University of Glasgow researchers, with a Raspberry Pi serving as its brain and powering the screens that make up its mouth and eyes.

How does Pepe do it?

The robot is very easy to attach to the wall next to a handwashing station and automatically detects approaching people. Using AI software, it encourages, monitors, and gives verbal feedback to children on their handwashing, all in a fun and engaging way.

Little boy is thinking: “What the…” then “OK, surrrrre”


Amol thinks the success of the robot was due to its eye movements, as people change their behaviour when they know they are being observed. A screen displaying a graphical mouth also meant the robot could show it was happy when the children washed their hands correctly; positive feedback such as this promotes learning new skills.

Socialising with Pepe keeps children at the sinks for longer


Amol’s team started work on this idea last year, and they were keen to test the Pepe robot with a group of people who had never been exposed to social robots before. They presented their smiling hand-face hybrid creation at the IEEE International Conference on Robot & Human Interactive Communication (see photo below). And now that hand washing has become more important than ever due to coronavirus, the project is getting mainstream media attention as well.

Photo borrowed from the official conference gallery

What’s next?

The team is now planning to improve Pepe’s autonomous intelligence and scale up the intervention across more schools through the Embracing the World network.

Pepe had a promising trial run, as shown by these stats from the University of Glasgow’s story on the pilot study:

  • More than 90% of the students liked the robot and said they would like to see Pepe again after the school vacation.
  • 67% of the respondents thought the robot was male, while 33% thought it was female, mostly citing the robot’s voice as the reason.
  • 60% said the robot was younger than them, feeling Pepe was like a younger brother or sister, while 33% thought it was older, and 7% perceived it to be the same age.
  • 72% of the students thought Pepe was alive, largely because of its ability to talk.

The post Raspberry Pi robot prompts proper handwashing appeared first on Raspberry Pi.

Ultrasonically detect bats with Raspberry Pi

Welcome to October, the month in which spiderwebs become decor and anything vaguely gruesome is considered ‘seasonal’. Such as bats. Bats are in fact cute, furry creatures, but as they are part of the ‘Halloweeny animal’ canon, I have a perfect excuse to sing their praises.

baby bats in a row wrapped up like human babies
SEE? Baby bats wrapped up cute like baby humans

Tegwyn Twmffat was tasked with doing a bat survey on a derelict building, and they took to DesignSpark to share their Raspberry Pi–powered solution.

UK law protects nesting birds and roosting bats, so before you go knocking buildings down, you need a professional to check that no critters will be harmed in the process.

The acoustic signature of an echo-locating brown long-eared bat

The problem with bats, compared to birds, is they are much harder to spot and have a tendency to hang out in tiny wall cavities. Enter this big ultrasonic microphone.

Raspberry Pi 4 Model B provided the RAM needed for this build

After the building was declared safely empty of bats, Tegwyn decided to keep hold of the expensive microphone (the metal tube in the image above) and have a crack at developing their own auto-classification system to detect which type of bats are about.

How does it work?

The ultrasonic mic picks up the audio data using an STM M0 processor and streams it to Raspberry Pi via USB. Raspberry Pi runs the ALSA driver software and uses bash to receive the data.

Tegwyn turned to the open-source GTK software to process the audio data

It turns out there are no publicly available audio recordings of bats, so Tegwyn took to their own back garden and found six species to record. And with the help of a few other bat enthusiasts, they cobbled together an audio dataset of 9 of the 17 bat species found in the UK!

More baby bats

Tegwyn’s original post about their project features a 12-step walkthrough, as well as all the code and commands you’ll need to build your own system. And here’s the GitHub repository, where you can check for updates.

The post Ultrasonically detect bats with Raspberry Pi appeared first on Raspberry Pi.

Code a Rally-X-style mini-map | Wireframe #43

Race around using a mini-map for navigation, just like the arcade classic, Rally-X. Mark Vanstone has the code

In Namco’s original arcade game, the red cars chased the player relentlessly around each level. Note the handy mini-map on the right.

The original Rally-X arcade game blasted onto the market in 1980, at the same time as Pac‑Man and Defender. This was the first year that developer Namco had exported its games outside Japan thanks to the deal it struck with Midway, an American game distributor. The aim of Rally-X is to race a car around a maze, avoiding enemy cars while collecting yellow flags – all before your fuel runs out.

The aspect of Rally-X that we’ll cover here is the mini-map. As the car moves around the maze, its position can be seen relative to the flags on the right of the screen. The main view of the maze only shows a section of the whole map, and scrolls as the car moves, whereas the mini-map shows the whole map at reduced size, but without any of the maze walls – just dots where the car and flags are (and, in the original, the enemy cars). In our example, the mini-map is five times smaller than the main map, so it’s easy to work out the calculation to translate large map co‑ordinates to mini-map co-ordinates.
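
As a rough sketch of that calculation (the panel offsets here are our own illustrative values, not from Mark’s actual code – only the 5:1 ratio comes from the article):

```python
# Translate full-map co-ordinates to mini-map co-ordinates.
# MAP_SCALE matches the article's 5:1 ratio; the panel offsets are assumed
# values placing the mini-map inside the 200-pixel side panel.

MAP_SCALE = 5                 # mini-map is five times smaller than the main map
PANEL_X, PANEL_Y = 620, 20    # top-left of the mini-map (illustrative)

def to_minimap(x, y):
    """Convert a position on the full map to a dot position on the mini-map."""
    return (x // MAP_SCALE + PANEL_X, y // MAP_SCALE + PANEL_Y)

# A car at (1500, 1000) on the full map appears at:
print(to_minimap(1500, 1000))   # (920, 220)
```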

To set up our Rally-X homage in Pygame Zero, we can stick with the default screen size of 800×600. If we use 200 pixels for the side panel, that leaves us with a 600×600 play area. Our player’s car will be drawn in the centre of this area at the co-ordinates 300,300. We can use the in-built rotation of the Actor object by setting the angle property of the car. The maze scrolls depending on which direction the car is pointing, and this can be done by having a lookup table in the form of a dictionary list (directionMap) where we define x and y increments for each angle the car can travel. When the cursor keys are pressed, the car stays central and the map moves.
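
A lookup table like that directionMap might be built as follows – the eight 45° headings, the speed, and the screen-axis convention (y increasing downwards, 0° pointing up) are all assumptions for illustration:

```python
import math

# Build a directionMap of x and y scroll increments for each angle the car
# can face. Eight 45-degree headings, a speed of 4 pixels per frame, and
# 0 degrees meaning "up the screen" are assumptions, not Mark's exact values.

SPEED = 4
directionMap = {
    angle: {"x": round(math.sin(math.radians(angle)) * SPEED),
            "y": round(-math.cos(math.radians(angle)) * SPEED)}
    for angle in range(0, 360, 45)
}

# With the car pointing at 90 degrees, the map scrolls horizontally:
print(directionMap[90])   # {'x': 4, 'y': 0}
```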

A screenshot of our Rally-X homage running in Pygame Zero

Roam the maze and collect those flags in our Python homage to Rally-X.

To detect the car hitting a wall, we can use a collision map. This isn’t a particularly memory-efficient way of doing it, but it’s easy to code. We just use a bitmap the same size as the main map which has all the roads as black and all the walls as white. With this map, we can detect if there’s a wall in the direction in which the car’s moving by testing the pixels directly in front of it. If a wall is detected, we rotate the car rather than moving it. If we draw the side panel after the main map, we’ll then be able to see the full layout of the screen with the map scrolling as the car navigates through the maze.
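
In miniature, the collision-map idea looks like this – a tiny grid of 1s (walls) and 0s (roads) stands in for the real black-and-white bitmap:

```python
# A minimal sketch of the collision map: white (1) for walls, black (0) for
# roads. A small grid stands in for a bitmap; the lookup plays the role of
# testing the pixel directly ahead of the car.

collision_map = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [1, 1, 1, 1, 1],
]

def wall_ahead(x, y, dx, dy):
    """Return True if the cell directly ahead of (x, y) is a wall."""
    nx, ny = x + dx, y + dy
    return collision_map[ny][nx] == 1

# Car at (1, 1) moving right: the cell ahead is road, so no wall.
print(wall_ahead(1, 1, 1, 0))   # False
# Moving up from (1, 1) hits the border wall, so the car would rotate instead.
print(wall_ahead(1, 1, 0, -1))  # True
```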

We can add flags as a list of Actor objects. We could make these random, but for the sake of simplicity, our sample code has them defined in a list of x and y co-ordinates. We need to move the flags with the map, so in each update(), we loop through the list and add the same increments to the x and y co‑ordinates as the main map. If the car collides with any flags, we just take them off the list of items to draw by adding a collected variable. Having put all of this in place, we can draw the mini-map, which will show the car and the flags. All we need to do is divide the object co-ordinates by five and add an x and y offset so that the objects appear in the right place on the mini-map.
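
The flag handling can be sketched like this – plain dicts stand in for Pygame Zero Actor objects, and the positions, collision radius, and function names are illustrative rather than Mark’s actual code:

```python
# Sketch of flag handling: flags scroll by the same increments as the map
# each update(), and a 'collected' marker takes them out of the draw list.

flags = [{"x": 400, "y": 250, "collected": False},
         {"x": 120, "y": 600, "collected": False}]

def update_flags(dx, dy, car_x=300, car_y=300, radius=20):
    """Scroll every flag with the map; mark any flag near the car as collected."""
    for flag in flags:
        flag["x"] += dx
        flag["y"] += dy
        if abs(flag["x"] - car_x) < radius and abs(flag["y"] - car_y) < radius:
            flag["collected"] = True

# One update where the map scrolls left and down brings the first flag
# under the car in the centre of the play area:
update_flags(-100, 60)
print([f["collected"] for f in flags])   # [True, False]
```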

And those are the basics of Rally-X! All it needs now is a fuel gauge, some enemy cars, and obstacles – but we’ll leave those for you to sort out…

Here’s Mark’s code for a Rally-X-style driving game with mini-map. To get it running on your system, you’ll need to install Pygame Zero. And to download the full code and assets, head here.

Get your copy of Wireframe issue 43

You can read more features like this one in Wireframe issue 43, available directly from Raspberry Pi Press — we deliver worldwide.

And if you’d like a handy digital version of the magazine, you can also download issue 43 for free in PDF format.

Wireframe #43, with the gorgeous Sea of Stars on the cover.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Code a Rally-X-style mini-map | Wireframe #43 appeared first on Raspberry Pi.

Raspberry Pi reaches more schools in rural Togo

We’ve been following the work of Dominique Laloux since he first got in touch with us in May 2013 ahead of leaving to spend a year in Togo. 75% of teachers in the region where he would be working had never used a computer before 2012, so he saw an opportunity to introduce Raspberry Pi and get some training set up.

We were so pleased to receive another update this year about Dominique and his Togolese team’s work. This has grown to become INITIC, a non-profit organisation that works to install low-cost, low-power, low-maintenance computer rooms in rural schools in Togo. The idea for the acronym came from the organisation’s focus on the INItiation of young people to ICT (TIC in French).

Visit the INITIC website to learn more

The story so far

INITIC’s first computer room was installed in Tokpli, Togo, way back in 2012. It was a small room (see the photo on the left below) donated by an agricultural association and renovated by a team of villagers.

  • The first INITIC room
  • The new INITIC building

Fast forward to 2018, and INITIC had secured its own building (photo on the right above). It has a dedicated Raspberry Pi Room, as well as a multipurpose room and another small technical room. Young people from local schools, as well as those in neighbouring villages, have access to the facilities.

The first ever dedicated Raspberry Pi room at K. Adamé

The first dedicated Raspberry Pi Room in Togo was at the Collège (secondary school) in the town of Kuma Adamé. It was equipped with 21 first-generation Raspberry Pis, which stood up impressively to the humid and dusty conditions.

Kpodzi High School’s Raspberry Pi Room

In 2019, Kpodzi High School also got its own Raspberry Pi Room, equipped with 22 Raspberry Pi workstations. Once the projector, laser printer, and scanners are in place, the space will also be used for electronics, Arduino, and programming workshops.

What’s the latest?

Ready for the unveiling…

Now we find ourselves in 2020, and INITIC is still growing. Young people in the bountiful but inaccessible village of Danyi Dzogbégan now have access to 20 Raspberry Pi workstations (plus one for the teacher), and have been using them for learning since January this year.

The first Raspberry Pi sessions in Danyi Dzogbégan

We can’t wait to see what Dominique and his team have up their sleeves next. You can help INITIC reach more young people in rural Togo by donating computer equipment, by helping teachers put lesson materials together, or through a volunteer stay at one of their facilities. Find out more here.

The post Raspberry Pi reaches more schools in rural Togo appeared first on Raspberry Pi.

“Tinkering is an equity issue” | Hello World #14

In the brand-new issue of Hello World magazine, Shuchi Grover tells us about the limits of constructionism, the value of formative assessment, and why programming can be a source of both joy and angst.

  • Cover of Hello World issue 14
  • Shuchi Grover

How much open-ended exploration should there be in computing lessons?

This is a question at the heart of computer science education and one which Shuchi Grover is delicately diplomatic about in the preface to her new book, Computer Science in K-12: An A-to-Z Handbook on Teaching Programming. The book’s chapters are written by 40 teachers and researchers in computing pedagogy, and Grover openly acknowledges the varying views around discovery-based learning among her diverse range of international authors.

“I wonder if I want to wade there,” she laughs. “The act of creating a program is in itself an act of creation. So there is hands-on learning quite naturally in the computer science classroom, and mistakes are made quite naturally. There are some things that are so great about computer science education. It lends itself so easily to being hands-on and to celebrating mistakes; debugging is par for the course, and that’s not the way it is in other subjects. The kids can actually develop some very nice mindsets that they can take to other classrooms.”

Shuchi Grover showing children something on a laptop screen

Grover is a software engineer by training, turned researcher in computer science education. She holds a PhD in learning sciences and technology design from Stanford University, where she remains a visiting scholar. She explains how the beginning of her research career coincided with the advent of the block-based programming language Scratch, now widely used as an introductory programming language for children.

“Almost two decades ago, I went to Harvard to study for a master’s called technology innovation and education, and it was around that time that I volunteered for robotics workshops at the MIT Media Lab and MIT Museum. Those were pretty transformative for me: I started after-school clubs and facilitated robotics and digital storytelling clubs. In the early 2000s, I was an educational technology consultant, working with teachers on integrating technology. Then Scratch came out, and I started working with teachers on integrating Scratch into languages, arts, and science, all the things that we are doing today.”

A girl with her Scratch project
Student Joyce codes in Scratch at her Code Club in Nunavut

Do her formative experiences at MIT, the birthplace of constructionist theory of student-centred, discovery-based learning, lead her to lean one way or another in the tinkering versus direct instruction debate? “The learning in informal spaces is, of course, very interest-driven. There is no measurement. Children are invited to a space to spend some time after school and do whatever they feel like. There would be kids who would be chatting away while a couple of them designed a robot, and then they would hand over the robot to some others and say, ‘OK, now you go ahead and program it,’ and there were some kids who would just like to hang about.

“When it comes to formal education, there needs to be more accountability, you want to do right by every child. You have to be more intentional. I do feel that while tinkering and constructionism was a great way to introduce interest-driven projects for informal learning, and there’s a lot to learn from there and bring to the formal learning context, I don’t think it can only be tinkering.”

“There needs to be more accountability to do right by every child.”

“Everybody knows that engagement is very important for learning — and this is something that we are learning more about: it’s not just interest, it’s also culture, communities, and backgrounds — but all of this is to say that there is a personal element to the learning process and so engagement is necessary, but it’s not a sufficient condition. You have to go beyond engagement, to also make sure that they are also engaging with the concepts. You want at some point for students to engage with the concept in a way that reveals what their misconceptions might be, and then they end up learning and understanding these things more deeply.

“You want a robust foundation — after all, our goal for teaching children anything at school is to build a foundation on which they build their college education and career and anything beyond that. If we take programming as a skill, you want them to have a good understanding of it, and so the personal connections are important, but so is the scaffolding.

“How much scaffolding needs to be done varies from context to context. Even in the same classroom, children may need different levels of scaffolding. It’s a sweet spot; within a classroom a teacher has to juggle so much. And therein lies the challenge of teaching: 30 kids at a time, and every child is different and every child is unique.

“It’s an equity issue. Some children don’t have the prior experience that sets them up to tinker constructively. After all, tinkering is meant to be purposeful exploration. And so it becomes an issue of who are you privileging with the pedagogy.”

She points out that each chapter in her book that comes from a more constructionist viewpoint clearly speaks of the need for scaffolding. And conversely, the chapters that take a more structured approach to computing education include elements of student engagement and children creating their own programs. “Frameworks such as Use-Modify-Create and PRIMM just push that open-ended creation a little farther down, making sure that the initial experiences have more guide rails.”

Approaches to assessment

Grover is a senior research scientist at Looking Glass Ventures, which in 2018 received a National Science Foundation grant to create Edfinity, a tool to enable affordable access to high-quality assessments for schools and universities.

In her book, she argues that asking students to write programs as a means of formative assessment has several pitfalls. It is time-consuming for both students and teachers, scoring is subjective, and it’s difficult to get a picture of how much understanding a student has of their code. Did they get their program to work through trial and error? Did they lift code from another student?

“Formative assessments that give quick feedback are much better. They focus on aspects of the conceptual learning that you want children to have. Multiple-choice questions on code force both the teachers and the children to experience code reading and code comprehension, which are just so important. Just giving children a snippet of code and saying: ‘What does this do? What will be the value of the variable? How many times will this be executed?’ — it goes down to the idea of code tracing and program comprehension.

“Research has also shown that anything you do in a classroom, the children take as a signal. Going back to the constructionist thing, when you foreground personal interest, there’s a different kind of environment in the classroom, where they’re able to have a voice, they have agency. That’s one of the good things about constructionism.

“Formative assessment signals to the student what it is that you’re valuing in the learning process. They don’t always understand what it is that they’re expected to learn in programming. Is the goal creating a program that runs? Or is it something else? And so when you administer these little check-ins, they bring more alignment between a teacher’s goals for the learners and the learners’ understanding of those goals. That alignment is important and it can get lost.”

Grover will present her latest research into assessment at our research seminar series next Tuesday 6 October — sign up to attend and join the discussion.

The joy and angst of programming

The title of Grover’s book, which could be thought to imply that computer science education consists solely of teaching students to program, may cause some raised eyebrows.

What about building robots or devices that interact with the world, computing topics like binary, or the societal impacts of technology? “I completely agree with the statement and the belief that computer science is not just about programming. I myself have been a proponent of this. But in this book I wanted to focus on programming for a couple of reasons. Programming is a central part of the computer science curriculum, at least here in the US, and it is also the part that teachers struggle with the most.

“I want to show where children struggle and how to help them.”

“As topics go, programming carries a lot of joy and angst. There is joy in computing, joy when you get it. But when a teacher is encountering this topic for the first time there is a lot of angst, because they themselves may not be understanding things, and they don’t know what it is that the children are not understanding. And there is this entire body of research on novice programming. There are the concepts, the practices, the pedagogies, and the issues of assessment. So I wanted to give the teachers all of that: everything we know about children and programming, the topics to be learnt, where they struggle, how to help them.”

Computer Science in K-12: An A-to-Z Handbook on Teaching Programming (reviewed in this issue of Hello World) is edited by Shuchi Grover and available now.

Hear more from Shuchi Grover, and subscribe to Hello World

We will host Grover at our next research seminar, Tuesday 6 October at 17:00–18:30 BST, where she will present her work on formative assessment.

Hello World is our magazine about all things computing education. It is free to download in PDF format, or you can subscribe and we will send you each new issue straight to your home.

In issue 14 of Hello World, we have gathered some inspiring stories to help your learners connect with nature. From counting penguins in Antarctica to orienteering with a GPS twist, great things can happen when young people get creative with technology outdoors. You’ll find all this and more in the new issue!

Educators based in the UK can subscribe to receive print copies for free!

The post “Tinkering is an equity issue” | Hello World #14 appeared first on Raspberry Pi.

Raspberry Pi High Quality Camera takes photos through thousands of straws

Adrian Hanft is our favourite kind of maker: weird. He’s also the guy who invented the Lego camera, 16 years ago. This time, he spent more than a year creating what he describes as “one of the strangest cameras you may ever hear about.”

What? Looks normal from here. Massive, but normal

What’s with all the straws?

OK, here’s why it’s weird: it takes photos with a Raspberry Pi High Quality Camera through a ‘lens’ of tiny drinking straws packed together. 23,248 straws, to be exact, are inside the wooden box-shaped bit of the machine above. The camera itself sits at the slim end of the black and white part. The Raspberry Pi, power bank, and controller all sit on top of the wooden box full of straws.

Here’s what an image of Yoda looks like, photographed through that many straws:

Mosaic, but make it techy

Ground glass lenses

The concept isn’t as simple as it may look. As you can see from the images below, if you hold up a load of straws, you can only see the light through a few of them. Adrian turned to older technology for a solution, taking a viewfinder from an old camera, which had ground glass (which ‘collects’ light) on its surface.

Left: looking through straws at light with the naked eye
Right: the same straws viewed through a ground glass lens

Even though Adrian was completely new to both Raspberry Pi and Python, it only took him a week of evenings and weekends to code the software needed to control the Raspberry Pi High Quality Camera.

Long story short, on the left is the final camera, with all the prototypes queued up behind it

An original Nintendo controller runs the show and connects to the Raspberry Pi with a USB adapter. The buttons are mapped to the functions of Adrian’s software.

A super satisfying time-lapse of the straws being loaded

What does the Nintendo controller do?

In his original post, Adrian explains what all the buttons on the controller do in order to create images:

“The Start button launches a preview of what the camera is seeing. The A button takes a picture. The Up and Down buttons increase or decrease the exposure time by 1 second. The Select button launches a gallery of photos so I can see the last photo I took. The Right and Left buttons cycle between photos in the gallery. I am saving the B button for something else in the future. Maybe I will use it for uploading to Dropbox, I haven’t decided yet.”
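
A dispatch table is a natural way to wire up a mapping like that. This is only a hypothetical sketch – the article doesn’t show Adrian’s software, so the function names and return values here are stand-ins:

```python
# Hypothetical sketch of mapping NES controller buttons to camera actions,
# in the spirit of Adrian's description. None of these names come from his
# actual code; real button events would arrive via the USB adapter.

def take_picture():
    return "photo taken"

def adjust_exposure(delta):
    return f"exposure {delta:+d}s"

def show_gallery():
    return "gallery"

button_actions = {
    "A":      take_picture,
    "UP":     lambda: adjust_exposure(+1),
    "DOWN":   lambda: adjust_exposure(-1),
    "SELECT": show_gallery,
}

# Pressing Up lengthens the exposure by one second:
print(button_actions["UP"]())   # exposure +1s
```

Keeping the B button unmapped, as Adrian has, just means leaving it out of the dictionary until a use is decided.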

Adrian made a Lego mount for the Raspberry Pi camera
The Lego mount makes it easy to switch between cameras and lenses

A mobile phone serves as a wireless display so he can keep an eye on what’s going on. The phone communicates with the Raspberry Pi connected to the camera via a VPN app.

One of the prototypes in action

Follow Adrian on Instagram to keep up with all the photography captured using the final camera, as well as the prototypes that came before it.

The post Raspberry Pi High Quality Camera takes photos through thousands of straws appeared first on Raspberry Pi.

13 Raspberry Pis slosh-test space shuttle tanks in zero gravity

High-school student Eleanor Sigrest successfully crowdfunded her way onto a zero-G flight to test her latest Raspberry Pi-powered project. NASA Goddard engineers peer-reviewed Eleanor’s experimental design, which detects unwanted movement (or ‘slosh’) in spacecraft fluid tanks.

The Raspberry Pi-packed setup

The apparatus features an accelerometer to precisely determine the moment of zero gravity, along with 13 Raspberry Pis and 12 Raspberry Pi cameras to capture the slosh movement.

What’s wrong with slosh?

The Broadcom Foundation shared a pretty interesting minute-by-minute report on Eleanor’s first parabolic flight and how she got everything working. But, in a nutshell…

The full apparatus onboard the zero gravity flight

You don’t want the fluid in your space shuttle tanks sloshing around too much: it can be a mission-ending problem. Slosh occurs on take-off and also during manoeuvres in microgravity, so Eleanor devised this novel approach to managing it in place of the costly, heavy subsystems currently used on board spacecraft.

Eleanor wanted to prove that the fluid inside tanks treated with superhydrophobic and superhydrophilic coatings settled quicker than in uncoated tanks. And she was right: settling times were reduced by 73% in some cases.

Eleanor at work

A continuation of this experiment is due to go up on Blue Origin’s New Shepard rocket – and yes, a patent is already pending.

Curiosity, courage & compromise

At just 13 years old, Eleanor won the Samueli Prize at the 2016 Broadcom MASTERS for her mastery of STEM principles and team leadership during a rigorous week-long competition. High praise came from Paula Golden, President of Broadcom Foundation, who said: “Eleanor is the epitome of a young woman scientist and engineer. She combines insatiable curiosity with courage: two traits that are essential for a leader in these fields.”

Eleanor aged 13 with her award-winning project ‘Rockets & Nozzles & Thrust… Oh My’

That week-long experience also included a Raspberry Pi Challenge, and Eleanor explained: “During the Raspberry Pi Challenge, I learned that sometimes the simplest solutions are the best. I also learned it’s important to try everyone’s ideas because you never know which one might work the best. Sometimes it’s a compromise of different ideas, or a compromise between complicated and simple. The most important thing is to consider them all.”

Get this girl to Mars already.

The post 13 Raspberry Pis slosh-test space shuttle tanks in zero gravity appeared first on Raspberry Pi.

17000ft | The MagPi 98

How do you get internet over three miles up the Himalayas? That’s what the 17000 ft Foundation and Sujata Sahu had to figure out. Rob Zwetsloot reports in the latest issue of the MagPi magazine, out now.

Living in more urban areas of the UK, it can be easy to take for granted decent internet and mobile phone signal. In more remote areas of the country, internet can be a bit spotty but it’s nothing compared with living up in a mountain.

Tablet computers are provided that connect to a Raspberry Pi-powered network

“17000 ft Foundation is a not-for-profit organisation in India, set up to improve the lives of people settled in very remote mountainous hamlets, in areas that are inaccessible and isolated due to reasons of harsh mountainous terrain,” explains its founder, Sujata Sahu. “17000 ft has its roots in high-altitude Ladakh, a region in the desolate cold desert of the Himalayan mountain region of India. Situated in altitudes upwards of 9300 ft and with temperatures dropping to -50°C in inhabited areas, this area is home to indigenous tribal communities settled across hundreds of tiny, scattered hamlets. These villages are remote, isolated, and suffer from bare minimum infrastructure and a centuries-old civilisation unwilling but driven to migrate to faraway cities in search of a better life. Ladakh has a population of just under 300,000 people living across 60,000 km2 of harsh mountain terrain, whose sustenance and growth depends on the infrastructure, resources, and support provided by the government.”

A huge number of students have already benefited from the program

The local governments have built schools. However, they don’t have enough resources or qualified teachers to be truly effective, resulting in a problem with students dropping out or having to be sent off to cities. 17000 ft’s mission is to transform the education in these communities.

High-altitude Raspberry Pi

“The Foundation today works in over 200 remote government schools to upgrade school infrastructure, build the capacity of teachers, provide better resources for learning, thereby improving the quality of education for its children,” says Sujata. “17000 ft Foundation has designed and implemented a unique solar-powered offline digital learning solution called the DigiLab, using Raspberry Pi, which brings the power of digital learning to areas which are truly off-grid and have neither electricity nor mobile connectivity, helping children to learn better, while also enabling the local administration to monitor performance remotely.”

Each school is provided with solar power, Raspberry Pi computers that act as a local internet for the school, and tablets to connect to it. The setup provides ‘last mile connectivity’ between the cloud and a remote school: an app on a teacher’s phone downloads data when it can, then updates the Raspberry Pi installed in their school.

Remote success

“The solution has now been implemented in 120 remote schools of Ladakh and is being considered to be implemented at scale to cover the entire region,” adds Sujata. “It has now run successfully across three winters of Ladakh, withstanding even the harshest of -50°C temperatures with no failure. In the first year of its implementation alone, 5000 students were enrolled, with over 93% being active. The system has now delivered over 60,000 hours of learning to students in remote villages and improved learning outcomes.”

Not all children stay in the villages year round

It’s already helping to change education in the area during the winter. Many villages (and schools) can shut down for up to six months, and families who can’t move away are usually left without a functioning school. 17000 ft has changed this.

“In the winter of 2018 and 2019, for the first time in a few decades, parents and community members from many of these hamlets decided to take advantage of their DigiLabs and opened them up for their children to learn despite the harsh winters and lack of teachers,” Sujata explains. “Parents pooled in to provide basic heating facilities (a Bukhari – a wood- or dung-based stove with a long pipe chimney) to bring in some warmth and scheduled classes for the senior children, allowing them to learn at their own pace, with student data continuing to be recorded in Raspberry Pi and available for the teachers to assess when they got back. The DigiLab Program, which has been made possible due to the presence of the Raspberry Pi Server, has solved a major problem that the Ladakhis have been facing for years!”

Some of the village schools go unused in the winter

How can people help?

Sujata says, “17000 ft Foundation is a non-profit organisation and is dependent on donations and support from individuals and companies alike. This solution was developed by the organisation in a limited budget and was implemented successfully across over a hundred hamlets. Raspberry Pi has been a boon for this project, with its low cost and its computing capabilities which helped create this solution for such a remote area. However, the potential of Raspberry Pi is as yet untapped and the solution still needs upgrades to be able to scale to cover more schools and deliver enhanced functionality within the school. 17000 ft is very eager to help take this to other similar regions and cover more schools in Ladakh that still remain ignored. What we really need is funds and technical support to be able to reach the good of this solution to more children who are still out of the reach of Ed Tech and learning. We welcome contributions of any size to help us in this project.”

For donations from outside India, write to sujata.sahu@17000ft.org. Indian citizens can donate through 17000ft.org/donate.


The MagPi magazine is out now, available in print from the Raspberry Pi Press online store, your local newsagents, and the Raspberry Pi Store, Cambridge.

You can also download the PDF directly from the MagPi magazine website.


The post 17000ft | The MagPi 98 appeared first on Raspberry Pi.

Embedding computational thinking skills in our learning resources

Learning computing is fun, creative, and exploratory. It also involves understanding some powerful ideas about how computers work and gaining key skills for solving problems using computers. These ideas and skills are collected under the umbrella term ‘computational thinking’.

When we create our online learning projects for young people, we think as much about how to get across these powerful computational thinking concepts as we do about making the projects fun and engaging. To help us do this, we have put together a computational thinking framework, which you can read right now.

What is computational thinking? A brief summary

Computational thinking is a set of ideas and skills that people can use to design systems that can be run on a computer. In our view, computational thinking comprises:

  • Decomposition
  • Algorithms
  • Patterns and generalisations
  • Abstraction
  • Evaluation
  • Data

All of these aspects are underpinned by logical thinking, the foundation of computational thinking.

What does computational thinking look like in practice?

In principle, the processes a computer performs can also be carried out by people. (To demonstrate this, computing educators have created a lot of ‘unplugged’ activities in which learners enact processes like computers do.) However, when we implement processes so that they can be run on a computer, we benefit from the huge processing power that computers can marshal for certain types of activities.

A group of young people and educators smiling while engaging with a computer

Computers need instructions that are designed in very particular ways. Computational thinking includes the set of skills we use to design instructions that computers can carry out. Because computers can only solve problems using logical processes, writing programs for them requires logical thinking approaches. For example, writing a computer program often requires the task the program revolves around to be broken down into smaller tasks that a computer can work through sequentially or in parallel. This approach, called decomposition, can also help people to think more clearly about computing problems: breaking down a problem into its constituent parts helps us understand the problem better.
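
As a toy illustration of decomposition (the task and all function names here are our own, chosen for the example): finding the average word length of a text breaks down into three smaller tasks, each simple enough for a computer to work through in sequence.

```python
# Decomposition in miniature: one problem split into three sub-tasks,
# each solved by its own small function, then composed.

def split_into_words(text):
    return text.split()

def word_lengths(words):
    return [len(word) for word in words]

def average(numbers):
    return sum(numbers) / len(numbers)

def average_word_length(text):
    # Each sub-task is solved separately, then the parts are combined.
    return average(word_lengths(split_into_words(text)))

print(average_word_length("computational thinking in practice"))   # 7.75
```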

Male teacher and male students at a computer

Understanding computational thinking supports people to take advantage of the way computers work to solve problems. Computers can run processes repeatedly and at amazing speeds. They can perform repetitive tasks that take a long time, or they can monitor states until conditions are met before performing a task. While computers sometimes appear to make decisions, they can only select from a range of pre-defined options. Designing systems that involve repetition and selection is another way of using computational thinking in practice.
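
Repetition and selection together might look like this in miniature – the sensor readings and threshold are made-up values standing in for a real monitored state:

```python
# Repetition and selection: repeatedly check a monitored value and select
# between pre-defined responses. The readings here are illustrative data.

readings = [18.2, 19.1, 20.5, 21.7]
THRESHOLD = 20.0

alerts = []
for value in readings:                          # repetition
    if value > THRESHOLD:                       # selection
        alerts.append(f"{value} above threshold")
    else:
        alerts.append(f"{value} ok")

print(alerts[-1])   # 21.7 above threshold
```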

Our computational thinking framework

Our team has been thinking about our approach to computational thinking for some time, and we have just published the framework we have developed to help us with this. It sets out the key areas of computational thinking, and then breaks these down into themes and learning objectives, which we build into our online projects and learning resources.

To develop this computational thinking framework, we worked with a group of academics and educators to make sure it is robust and useful for teaching and learning. The framework was also influenced by work from organisations such as Computing At School (CAS) in the UK, and the Computer Science Teachers’ Association (CSTA) in the USA.

We’ve been using the computational thinking framework to help us make sure we are building opportunities to learn about computational thinking into our learning resources. This framework is a first iteration, which we will review and revise based on experience and feedback.

We’re always keen to hear feedback from you in the community about how we shape our learning resources, so do let us know what you think about them and the framework in the comments.

The post Embedding computational thinking skills in our learning resources appeared first on Raspberry Pi.

Raspberry Pi powered e-paper display takes months to show a movie

We loved the filmic flair of Tom Whitwell‘s super slow e-paper display, which takes months to play a film in full.

Living art

His creation plays films at about two minutes of screen time per 24 hours, taking a little under two months for a 110-minute film. Psycho played in a corner of his dining room for two months. The infamous shower scene lasted a day and a half.

Tom enjoys the opportunity for close study of iconic filmmaking, but you might like this project for the living artwork angle. How cool would this be playing your favourite film onto a plain wall somewhere you can see it throughout the day?

The Raspberry Pi wearing its e-Paper HAT

Four simple steps

Luckily, this is a relatively simple project – no hardcore coding, no soldering required – with just four steps to follow if you’d like to recreate it:

  1. Get the Raspberry Pi working in headless mode without a monitor, so you can upload files and run code
  2. Connect to an e-paper display via an e-paper HAT (see above image; Tom is using this one) and install the driver code on the Raspberry Pi
  3. Use Tom’s code to extract frames from a movie file, resize and dither those frames, display them on the screen, and keep track of progress through the film
  4. Find some kind of frame to keep it all together (Tom went with a trusty IKEA number)
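As an illustration of the dithering in step 3 (a generic sketch, not Tom's actual code, which lives in his repo), Floyd–Steinberg error diffusion converts each greyscale frame into the pure black-and-white pixels an e-paper display can show:

```python
def floyd_steinberg(pixels):
    """Dither a 2D list of 0-255 grey levels to pure black/white in place."""
    h, w = len(pixels), len(pixels[0])
    for y in range(h):
        for x in range(w):
            old = pixels[y][x]
            new = 255 if old >= 128 else 0   # snap to black or white
            pixels[y][x] = new
            err = old - new                  # spread the rounding error...
            if x + 1 < w:
                pixels[y][x + 1] += err * 7 / 16   # ...to the right
            if y + 1 < h:
                if x > 0:
                    pixels[y + 1][x - 1] += err * 3 / 16
                pixels[y + 1][x] += err * 5 / 16   # ...and the row below
                if x + 1 < w:
                    pixels[y + 1][x + 1] += err * 1 / 16
    return pixels

# A flat mid-grey patch becomes a mix of black and white pixels
img = [[128] * 8 for _ in range(8)]
out = floyd_steinberg(img)
```

Spreading each pixel's quantisation error to its neighbours is what preserves the impression of mid-tones on a two-colour screen.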
Living artwork: the Psycho shower scene playing alongside still artwork in Tom’s home

Affordably arty

The entire build cost £120. Tom chose a 2GB Raspberry Pi 4 and a 64GB NOOBS SD card, which he bought from Pimoroni, one of our approved resellers. NOOBS included almost all the libraries he needed for this project, which made life a lot easier.

His original post is a dream of a comprehensive walkthrough, including all the aforementioned code.

2001: A Space Odyssey would take months to play on Tom’s creation

Head to the comments section with your vote for the creepiest film to watch in ultra slow motion. I came over all peculiar imagining Jaws playing on my living room wall for months. Big bloody mouth opening slooooowly (pales), big bloody teeth clamping down slooooowly (heart palpitations). Yeah, not going to try that. Sorry Tom.

The post Raspberry Pi powered e-paper display takes months to show a movie appeared first on Raspberry Pi.

Raspberry Pi turns retro radio into interactive storyteller

8 Bits and a Byte created this voice-controllable, interactive, storytelling device, hidden inside a 1960s radio for extra aesthetic wonderfulness.

A Raspberry Pi 3B works with an AIY HAT, a microphone, and the device’s original speaker to run chatbot and speech-to-text artificial intelligence.

This creature is a Bajazzo TS made by Telefunken some time during the 1960s in West Germany, and this detail inspired the espionage-themed story that 8 Bits and a Byte retrofitted it to tell. Users are intelligence agents whose task is to find the evil Dr Donogood.

Out with the old electronics

The device works like one of those ‘choose your own adventure’ books, asking you a series of questions and offering you several options. The story unfolds according to the options you choose, and leads you to a choice of endings.

In with the new (Raspberry Pi tucked in the lower right corner)

What’s the story?

8 Bits and a Byte designed a decision tree to provide a tight story frame, so users can’t go off on question-asking tangents.

When you see the ‘choose your own adventure’ frame set out like this, you can see how easy it is to create something that feels interactive, but really only needs to understand the difference between a few phrases: ‘laser pointer’, ‘lockpick’, ‘drink’, ‘take bribe’, and ‘refuse bribe’.

How does it interact with the user?

Skip to 03mins 30secs to see the storytelling in action

Google Dialogflow is a free natural language understanding platform that makes it easy to design a conversational user interface, which is long-speak for ‘chatbot’.

There are a few steps between the user talking to the radio, and the radio figuring out how to respond. The speech-to-text and chatbot software need to work in tandem. For this project, the data flow runs like so:

1: The microphone detects that someone is speaking and records the audio.

2-3: Google AI (the Speech-To-Text box) processes the audio and extracts the words the user spoke as text.

4-5: The chatbot (Google Dialogflow) receives this text and matches it with the correct response, which is sent back to the Raspberry Pi.

6-7: Some more artificial intelligence uses this text to generate artificial speech.

8: This audio is played to the user via the speaker.
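The decision tree itself can be as simple as a lookup table. Here's a hypothetical Python miniature (the story nodes and phrases are invented for illustration, not taken from 8 Bits and a Byte's code):

```python
# Hypothetical miniature decision tree in the spirit of the project
STORY = {
    "start": {
        "prompt": "A guard blocks the door. Use your laser pointer or lockpick?",
        "options": {"laser pointer": "alarm", "lockpick": "inside"},
    },
    "alarm": {"prompt": "The alarm sounds. Game over.", "options": {}},
    "inside": {"prompt": "You found Dr Donogood. You win!", "options": {}},
}

def respond(node, heard):
    """Match recognised speech text against the current node's options."""
    for phrase, nxt in STORY[node]["options"].items():
        if phrase in heard.lower():
            return nxt
    return node  # unrecognised input: stay on this node and re-ask

state = respond("start", "I'll try the lockpick")
print(STORY[state]["prompt"])  # You found Dr Donogood. You win!
```

In the real build, steps 4–5 (the Dialogflow matching) do the work of `respond`, with far more robust phrase recognition.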

Make sure to check out more of 8 Bits and a Byte’s projects on YouTube. We recommend Mooomba the cow roomba.

The post Raspberry Pi turns retro radio into interactive storyteller appeared first on Raspberry Pi.

Code a GUI live with Digital Making at Home

This week, we’re introducing young people around the world to coding GUIs, or graphical user interfaces. Let them tune in this Wednesday at 5.30pm BST / 12.30pm EDT / 10.00pm IST for a fun live stream code-along session with Christina and special guest Martin! They’ll learn about GUIs, can ask us questions, and get to code a painting app.

For beginner coders, we have our Thursday live stream at 3.30pm PDT / 5.30pm CDT / 6.30pm EDT, thanks to support from Infosys Foundation USA! Christina will share more fun Scratch coding for beginners.

Now that school is back in session for many young people, we’ve wrapped up our weekly code-along videos. You and your children can continue coding with us during the live stream, whether you join us live or watch the recorded session on demand. Thanks to everyone who watched our more than 90 videos and 45 hours of digital making content these past months!

The post Code a GUI live with Digital Making at Home appeared first on Raspberry Pi.

Build an arcade cabinet | Hackspace 35

Games consoles might be fast and have great graphics, but they’re no match for the entertainment value of a proper arcade machine. In this month’s issue of Hackspace magazine, you’re invited to relive your misspent youth with this huge build project.

There’s something special about the comforting solidity of a coin-eating video game monolith, and nothing screams retro fun like a full-sized arcade cabinet sitting in the corner of the room. Classic arcade machines can be a serious investment. Costing thousands of pounds and weighing about the same as a giant panda, they’re out of reach for all but the serious collector. Thankfully, you can recreate that retro experience using modern components for a fraction of the price and weight.

An arcade cabinet is much easier to make than you might expect. It’s essentially a fancy cupboard that holds a monitor, speakers, a computer, a keyboard, and some buttons. You can make your own cabinet using not much more than a couple of sheets of MDF, some clear plastic, and a few cans of spray paint.

If you want a really authentic-looking cabinet, you can find plenty of plans and patterns online. However, most classic cabinets are a bit bigger than you might remember, occupying almost a square metre of floor space. If you scale that down to a footprint of approximately 60 cm square, you can make an authentic-looking home arcade cabinet that won’t take over the entire room, and can be cut from just two pieces of 8 × 4 (2440 mm × 1220 mm) MDF. You can download our plans, but these are rough plans designed for you to tweak into your own creation. A sheet of 18 mm MDF is ideal for making the body of the cabinet, and 12 mm MDF works well to fill in the front and back panels. You can use thinner sheets of wood to make a lighter cabinet, but you might find it less sturdy and more difficult to screw into.

The sides of the machine should be cut from 18 mm MDF, and will be 6 feet high. The sides need to be as close to identical as possible, so mark out the pattern for the side on one piece of 18 mm MDF, and screw the boards together to hold them while you cut. You can avoid marking the sides by placing the screws through the waste areas of the MDF. Keep these offcuts to make internal supports or brackets. You can cut the rest of the pieces of MDF using the project plans as a guide. 

Why not add a coin machine for extra authenticity?

Attach the side pieces to the base, so that the sides hang lower than the base by an inch or two. If you’re more accomplished at woodworking and want to make the strongest cabinet possible, you can use a router to joint and glue the pieces of wood together. This will make the cabinet very slightly narrower and will affect some measurements, but if you follow the old adage to measure twice and cut once, you should be fine. If you don’t want to do this, you can use large angle brackets and screws to hold everything together. The cabinet will still be strong, and you’ll have the added advantage that you can disassemble it in the future if necessary.

Keep attaching the 18 mm MDF pieces, starting with the top piece and the rear brace. Once you have these pieces attached, the cabinet should be sturdy enough to start adding the thinner panels. Insetting the panels by about an inch gives the cabinet that retro look, and also hides any design crimes you might have committed while cutting out the side panels.

The absolute sizing of the cabinet isn’t critical unless you’re trying to make an exact copy of an old machine, so don’t feel too constrained by measuring things down to the millimetre. As long as the cabinet is wide enough to accept your monitor, everything else is moveable and can be adjusted to suit your needs.

Make it shiny

You can move onto decoration once the cabinet woodwork is fitted together. This is mostly down to personal preference, although it’s wise to think about which parts of the case will be touched more often, and whether your colour choices will cause any problems with screen reflection. Matt black is a popular choice for arcade cabinets because it’s non-reflective and any surface imperfections are less noticeable with a matt paint finish.

Aluminium checker plate is a good way of protecting your cabinet from damage, and it can be cut and shaped easily.

Wallpaper or posters make a great choice for decorating the outside of the cabinet, and they are quick to apply. Just be sure to paste all the way up to the edge, and protect any areas that will be handled regularly with aluminium checker plate or plastic sheet. The edges of MDF sheets can be finished with iron-on worktop edging, or with the chrome detailing tape used on cars. You can buy detailing tape in 12 mm and 18 mm widths, which makes it great for finishing edges. The adhesive tape provided with the chrome edging isn’t always very good, so it’s worth investing in some high-strength, double-sided clear vinyl foam tape.

You’ve made your cabinet, but it’s empty at the moment. You’re going to add a Raspberry Pi, monitor, speakers, and a panel for buttons and joysticks. To find out how, you can read the full article in HackSpace magazine 35.  

Get HackSpace magazine 35 Out Now!

Each month, HackSpace magazine brings you the best projects, tips, tricks and tutorials from the makersphere. You can get it from the Raspberry Pi Press online store, The Raspberry Pi store in Cambridge, or your local newsagents.

Each issue is free to download from the HackSpace magazine website.

If you subscribe for 12 months, you get an Adafruit Circuit Playground Express, or can choose from one of our other subscription offers, including this amazing limited-time offer of three issues and a book for only £10!

The post Build an arcade cabinet | Hackspace 35 appeared first on Raspberry Pi.

How is computing taught in schools around the world?

Around the world, formal education systems are bringing computing knowledge to learners. But what exactly is set down in different countries’ computing curricula, and what are classroom educators teaching? This was the topic of the first in the autumn series of our Raspberry Pi research seminars on Tuesday 8 September.

A glowing globe floating above an open hand in the dark

We heard from an international team (Monica McGill, USA; Rebecca Vivian, Australia; Elizabeth Cole, Scotland) who represented a group of researchers also based in England, Malta, Ireland, and Italy. As a researcher working at the Raspberry Pi Foundation, I myself was part of this research group. The group developed METRECC, a comprehensive and validated survey tool that can be used to benchmark and measure developments in the teaching and learning of computing in formal education systems around the world. Monica, Rebecca, and Elizabeth presented how the research group developed and validated the METRECC tool, and shared some findings from their pilot study.

What’s in a curriculum? Developing a survey tool

Those of us who work or have worked in school education use the word ‘curriculum’ frequently, although it’s an example of education terminology that means different things in different contexts, and to different people. Following Porter and Smithson (2001)1, we can distinguish between the intended curriculum and the enacted curriculum:

  • Intended curriculum: Policy tools such as curriculum standards, frameworks, or guidelines that outline the curriculum teachers are expected to deliver.
  • Enacted curriculum: The curricular content students actually engage with in the classroom, and the pedagogical approaches adopted; for computer science (CS) curricula, this also includes students’ use of technology, physical computing devices, and tools in CS lessons.

To compare the intended and enacted computing curriculum in as many countries as possible, at particular points in time, the research group Monica, Rebecca, Elizabeth, and I were part of developed the METRECC survey tool.

A classroom of students in North America

METRECC stands for MEasuring TeacheR-Enacted Computing Curriculum. The METRECC survey has 11 categories of questions and is designed to be completed by computing teachers within 35–40 minutes. Following best practice in research, which calls for standardised research instruments, the research group ensured that the survey produces valid, reliable results (meaning that it works as intended) before using it to gather data.

Using METRECC in a pilot study

In their pilot study, the research group gathered data from 7 countries. The intended curriculum for each country was determined by examining standards and policies in place for each country/state under consideration. Teachers’ answers in the METRECC survey provided the countries’ enacted curricula. (The complete dataset from the pilot study is publicly available at csedresearch.org, a very useful site for CS education researchers where many surveys are shared.)

Two girls coding at a computer under supervision of a female teacher

The researchers then mapped the intended to the enacted curricula to find out whether teachers were actually teaching the topics that were prescribed for them. Overall, the results of the mapping showed that there was a good match between intended and enacted curricula. Examples of mismatches include lower numbers of primary school teachers reporting that they taught visual or symbolic programming, even though the topic did appear on their curriculum.

A table listing computer science topics
This table shows the computer science topics the METRECC tool asks teachers about, and the percentage of respondents in the pilot study who stated that they teach each of these to their students.

Another aspect of the METRECC survey allows researchers to measure teachers’ confidence, self-efficacy, and self-esteem. The results of the pilot study showed a relationship between years of experience and CS self-esteem; in particular, after four years of teaching, teachers started to report high self-esteem in relation to computer science. Moreover, primary teachers reported significantly lower self-esteem than secondary teachers did, and female teachers reported lower self-esteem than male teachers did.

Adapting the survey’s language

The METRECC survey has also been used in South Asia, namely Bangladesh, Nepal, Pakistan, and Sri Lanka (where computing is taught under ICT). Amongst other things, what the researchers learned from that study was that some of the survey questions needed to be adapted to be relevant to these countries. For example, while in the UK we use the word ‘gifted’ to mean ‘high-attaining’, in the South Asian countries involved in the study, to be ‘gifted’ means having special needs.

Two girls coding at a computer under supervision of a female teacher

The study highlighted how important it is to ensure that surveys intended for an international audience use terminology and references that are pertinent to many countries, or that the survey language is adapted so that it makes sense in each context in which it is delivered.

Let’s keep this monitoring of computing education moving forward!

The seminar presentation was well received, and because we now hold our seminars for 90 minutes instead of an hour, we had more time for questions and answers.

My three main take-aways from the seminar were:

1. International collaboration is key

It is very valuable to be able to form international working groups of researchers collaborating on a common project; we have so much to learn from each other. Our Raspberry Pi research seminars attract educators and researchers from many different parts of the world, and we can truly push the field’s understanding forward when we listen to experiences and lessons of people from diverse contexts and cultures.

2. Making research data publicly available

Increasingly, it is expected that research datasets are made available in publicly accessible repositories. While this is becoming the norm in healthcare and the sciences, it’s not yet as prevalent in computing education research. It was great to be able to publicly share the dataset from the METRECC pilot study, and we encourage other researchers in this field to do the same.

3. Extending the global scope of this research

Finally, this work is only just beginning. Over the last decade, there has been an increasing move towards teaching aspects of computer science in school in many countries around the world, and being able to measure change and progress is important. Only a handful of countries were involved in the pilot study, and it would be great to see this research extend to more countries, with larger numbers of teachers involved, so that we can really understand the global picture of formal computing education. Budding research students, take heed!

Next up in our seminar series

If you missed the seminar, you can find the presentation slides and a recording of the researchers’ talk on our seminars page.

In our next seminar on Tuesday 6 October at 17:00–18:30 BST / 12:00–13:30 EDT / 9:00–10:30 PT / 18:00–19:30 CEST, we’ll welcome Shuchi Grover, a prominent researcher in the area of computational thinking and formative assessment. The title of Shuchi’s seminar is Assessments to improve student learning in introductory CS classrooms. To join, simply sign up with your name and email address.

Once you’ve signed up, we’ll email you the seminar meeting link and instructions for joining. If you attended this past seminar, the link remains the same.


1. Andrew C. Porter and John L. Smithson. 2001. Defining, Developing and Using Curriculum Indicators. CPRE Research Reports, 12-2001. (2001)

The post How is computing taught in schools around the world? appeared first on Raspberry Pi.

Raspberry Pi enables world’s smallest iMac

This project goes a step further than most custom-made Raspberry Pi cases: YouTuber Michael Pick hacked a Raspberry Pi 4 and stuffed it inside this Apple lookalike to create the world’s smallest ‘iMac’.

Michael designed and 3D printed this miniature ‘iMac’ with what he calls a “gently modified” Raspberry Pi 4 at the heart. Everything you see is hand-painted and -finished to achieve an authentic, sleek Apple look.

This is “gentle modification” we just mentioned

Even after all that power tool sparking, this miniature device is capable of playing Minecraft at 1000 frames per second. Michael was set on making the finished project as thin as possible, so he had to slice off a couple of his Raspberry Pi’s USB ports and the Ethernet socket to make everything fit inside the tiny, custom-made case. This hacked setup leaves you with Bluetooth and wireless internet connections, which, as Michael explains in the build video, “if you’re a Mac user, that’s all you’re ever going to need.”

We love watching 3D printer footage set to relaxed elevator music

This teeny yet impactful project has even been featured on forbes.com, and that’s where we learned how the tightly packed tech manages to work in such a restricted space:

“A wireless dongle is plugged into one of the remaining USB ports to ensure it’s capable of connecting to a wireless keyboard and mouse, and a low-profile ribbon cable is used to connect the display to the Raspberry Pi. Careful crimping of cables and adapters ensures the mini iMac can be powered from a USB-C extension cable that feeds in under the screen, while the device also includes a single USB 2 port.”

Barry Collins | forbes.com

The maker also told forbes.com that this build was inspired by an iRaspbian software article from tech writer Barry Collins. iRaspbian puts a Mac-like interface — including Dock, Launcher and even the default macOS wallpaper — on top of a Linux distro. We guess Michael just wanted the case to match the content, hey?

Check out Michael’s YouTube channel for more inexplicably cool builds, such as a one billion volt Thor hammer.

The post Raspberry Pi enables world’s smallest iMac appeared first on Raspberry Pi.

Global sunrise/sunset Raspberry Pi art installation

24h Sunrise/Sunset is a digital art installation that displays a live sunset and sunrise happening somewhere in the world with the use of CCTV.

Image by fotoswiss.com

Artist Dries Depoorter wanted to prove that “CCTV cameras can show something beautiful”, and turned to Raspberry Pi to power this global project.

Image by fotoswiss.com

Harnessing CCTV

The arresting visuals are beamed to viewers using two Raspberry Pi 3B+ computers, which stream internet protocol (IP) cameras using the command-line media player OMXPlayer, plus an Arduino Nano Every.

Dual Raspberry Pi power

The two Raspberry Pis communicate with each other using the MQTT protocol — a standard messaging protocol for the Internet of Things (IoT) that’s ideal for connecting remote devices with a small code footprint and minimal network bandwidth.

One of the Raspberry Pis checks at which location in the world a sunrise or sunset is happening and streams the closest CCTV camera.
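Dries hasn't published this selection logic in the post, but a first-order approximation illustrates the idea: local solar time runs ahead of UTC by 15° of longitude per hour, and sunset falls near 18:00 solar time around the equinoxes, so the longitude currently experiencing sunset can be estimated like this (a deliberate simplification that ignores season and latitude):

```python
from datetime import datetime, timezone

def sunset_longitude(utc_now):
    """Longitude (degrees, east positive) where local solar time is ~18:00."""
    hours = utc_now.hour + utc_now.minute / 60
    lon = (18 - hours) * 15           # the Earth rotates 15 degrees per hour
    return (lon + 180) % 360 - 180    # wrap into [-180, 180)

# At 12:00 UTC, sunset is happening roughly along longitude 90 degrees east
print(sunset_longitude(datetime(2020, 9, 22, 12, 0, tzinfo=timezone.utc)))  # 90.0
```

The real installation refines this with the Sun, Geopy, and Pytz libraries mentioned below, which account for date and latitude.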

The insides of the sleek display screen…

Beam me out, Scotty

The big screens are connected to the Arduino over the I2C protocol, and the Arduino is connected to the second Raspberry Pi over a serial link. Dries also made a custom printed circuit board (PCB) so the build looks cleaner.

All that hardware is powered by an industrial power supply, just because Dries liked the style of it.

…and the outside

Software

Everything is written in Python 3, and Dries harnessed the Python 3 libraries BeautifulSoup, Sun, Geopy, and Pytz to calculate sunrise and sunset times at specific locations. Google Firebase databases in the cloud help with admin by way of saving timestamps and the IP addresses of the cameras.

Hardware

The artist standing in front of the two large display screens
Image of the artist with his work by fotoswiss.com

And, lastly, Dries requested a shoutout for his favourite local Raspberry Pi shop Gotron in Ghent.

If you’d like to check out more of Dries’ work, you can find him online here or on Instagram.

The post Global sunrise/sunset Raspberry Pi art installation appeared first on Raspberry Pi.

How young people can run their computer programs in space with Astro Pi

Do you know young people who dream of sending something to space? You can help them make that dream a reality!

We’re calling on educators, club leaders, and parents to inspire young people to develop their digital skills by participating in this year’s European Astro Pi Challenge.

The European Astro Pi Challenge, which we run in collaboration with the European Space Agency, gives young people in 26 countries* the opportunity to write their own computer programs and run them on two special Raspberry Pi units — called Astro Pis! — on board the International Space Station (ISS).

This year’s Astro Pi ambassador is ESA astronaut Thomas Pesquet. Thomas will accompany our Astro Pis on the ISS and oversee young people’s programs while they run.

And the young people need your support to take part in the Astro Pi Challenge!

A group of young people and educators smiling while engaging with a computer

Astro Pi is back big-time!

The Astro Pi Challenge is back and better than ever, with a brand-new website, a cool new look, and the chance for more young people to get involved.

Logo of the European Astro Pi Challenge

During the last challenge, a record 6558 Astro Pi programs from over 17,000 young people ran on the ISS, and we want even more young people to take part in our new 2020/21 challenge.

British ESA astronaut Tim Peake was the ambassador of the first Astro Pi Challenge in 2015.

So whether your children or learners are complete beginners to programming or have experience of Python coding, we’d love for them to take part!

You and your young people have two Astro Pi missions to choose from: Mission Zero and Mission Space Lab.

Mission Zero — for beginners and younger programmers

In Mission Zero, young people write a simple program to take a humidity reading onboard the ISS and communicate it to the astronauts with a personalised message, which will be displayed for 30 seconds.

Logo of Mission Zero, part of the European Astro Pi Challenge

Mission Zero is designed for beginners and younger participants up to 14 years old. Young people can complete Mission Zero online in about an hour following a step-by-step guide. Taking part doesn’t require any previous coding experience or specific hardware.
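Programs of this kind are genuinely short. Here's a hypothetical Mission Zero-style sketch (the official step-by-step guide at astro-pi.org is the authoritative template; the fallback class below just lets the sketch run on a machine without a Sense HAT):

```python
# Hypothetical Mission Zero-style program (illustrative only; names
# follow the Sense HAT Python API used on the Astro Pi units).
try:
    from sense_hat import SenseHat  # present on the Astro Pi hardware
except ImportError:
    class SenseHat:  # stand-in so the sketch runs without a Sense HAT
        def get_humidity(self):
            return 45.2  # made-up reading, percent relative humidity
        def show_message(self, text, scroll_speed=0.1):
            print(text)

sense = SenseHat()
humidity = round(sense.get_humidity(), 1)
sense.show_message(f"Hello from the ISS! Humidity: {humidity}%")
```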

All Mission Zero participants who follow the simple challenge rules are guaranteed to have their programs run aboard the ISS in 2021.

All you need to do is support the young people to submit their programs!

Mission Zero is a perfect activity for beginners to digital making and Python programming, whether they’re young people at home or in coding clubs, or groups of students or club participants.

We have made some exciting changes to this year’s Mission Zero challenge:

  1. Participants will be measuring humidity on the ISS instead of temperature
  2. For the first time, young people can enter individually, as well as in teams of up to 4 people

You have until 19 March 2021 to support your young people to submit their Mission Zero programs!

Mission Space Lab — for young people with programming experience

In Mission Space Lab, teams of young people design and program a scientific experiment to run for 3 hours onboard the ISS.

Logo of Mission Space Lab, part of the European Astro Pi Challenge

Mission Space Lab is aimed at more experienced or older participants up to 19 years old, and it takes place in 4 phases over the course of 8 months.

Your role in Mission Space Lab is to mentor a team of participants while they design and write a program for a scientific experiment that increases our understanding of either life on Earth or life in space.

The best experiments will be deployed to the ISS, and teams will have the opportunity to analyse their experimental data and report on their results.

You have until 23 October 2020 to register your team and their experiment idea.

To see the kind of experiments young people have run on the ISS, check out our blog post congratulating the Mission Space Lab 2019/20 winners!

Get started with Astro Pi today!

To find out more about taking part in the European Astro Pi Challenge 2020/21, head over to our new and improved astro-pi.org website.

screenshot of Astro Pi home page

There, you’ll find everything you need to get started on sending young people’s computer programs to space!


* ESA Member States in 2020: Austria, Belgium, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Luxembourg, the Netherlands, Norway, Poland, Portugal, Romania, Spain, Sweden, Switzerland, and the United Kingdom. Other participating states: Canada, Latvia, Slovenia, Malta.

The post How young people can run their computer programs in space with Astro Pi appeared first on Raspberry Pi.

Coding for concentration with Digital Making at Home

September is wellness month at Digital Making at Home. Your young makers can code along with our educators every week to create projects that focus on their well-being. This week’s brand-new projects are all about helping young people concentrate better.

Through Digital Making at Home, we invite parents and kids all over the world to code and make along with us and our new projects, videos, and live streams every week.

This week’s live stream will take place on Wednesday at 5.30pm BST / 12.30pm EDT / 10.00pm IST at rpf.io/home. Let your kids join in so they can progress to the next stage of learning to code with Scratch!

If you’re in the USA, your young people can join Christina on Thursday at 3.30pm PDT / 5.30pm CDT / 6.30pm EDT for an additional US-time live stream! Christina will show newcomers how to begin coding Scratch projects. Thanks to our partners Infosys Foundation USA for making this new live stream possible.

The post Coding for concentration with Digital Making at Home appeared first on Raspberry Pi.

What the blink is my IP address?

Picture the scene: you have a Raspberry Pi configured to run on your network, you power it up headless (without a monitor), and now you need to know which IP address it was assigned.

Matthias came up with this solution, which makes your Raspberry Pi blink its IP address. He uses a Raspberry Pi Zero W headless for most of his projects, and got bored with having to look the address up on his DHCP server or hunt for it by pinging different IP addresses.

How does it work?

A script runs when you start your Raspberry Pi and indicates which IP address is assigned to it by blinking it out on the device’s LED. The script comprises about 100 lines of Python, and you can get it on GitHub.

A screen running Python
Easy peasy GitHub breezy

The power/status LED on the edge of the Raspberry Pi blinks numbers in a Roman numeral-like scheme. You can tell which number it’s blinking based on the length of the blink and the gaps between each blink, rather than, for example, having to count nine blinks for a number nine.

Blinking in Roman numerals

Short, fast blinks represent the numbers one to four, depending on how many short, fast blinks you see. A gap between short, fast blinks means the LED is about to blink the next digit of the IP address, and a longer blink represents the number five. So reading the combination of short and long blinks will give you your device’s IP address.

You can see this in action at this exact point in the video. You’ll see the LED blink fast once, then leave a gap, blink fast once again, then leave a gap, then blink fast twice. That means the device’s IP address ends in 112.
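Interpreting the scheme this way (an assumption based on the description above; Matthias' actual ~100-line script is on GitHub), each digit becomes a Roman-numeral-style pattern of long and short flashes:

```python
SHORT, LONG = "S", "L"  # short flash counts 1, long flash counts 5

def blink_pattern(digit):
    """Encode a digit 1-9 Roman-numeral style, e.g. 7 -> 'LSS' (5 + 1 + 1)."""
    if not 1 <= digit <= 9:
        raise ValueError("zero handling is left to the real script")
    longs, shorts = divmod(digit, 5)
    return LONG * longs + SHORT * shorts

def encode_octet(octet):
    """One blink pattern per decimal digit, with a gap between patterns."""
    return [blink_pattern(int(d)) for d in str(octet)]

print(encode_octet(112))  # ['S', 'S', 'SS']
```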

What are octets?

Luckily, you usually only need to know the last three numbers of the IP address (the last octet), as the previous octets will almost always be the same for all other computers on the LAN.

The script blinks out the last octet ten times, to give you plenty of chances to read it. Then it returns the LED to its default functionality.

Which LED on which Raspberry Pi?

On a Raspberry Pi Zero W, the script uses the green status/power LED, and on other Raspberry Pis it uses the green LED next to the red power LED.

The green LED blinking the IP address (the red power LED is slightly hidden by Matthias’ thumb)

Once you get the hang of the Morse code-like blinking style, this is a really nice quick solution to find your device’s IP address and get on with your project.

The post What the blink is my IP address? appeared first on Raspberry Pi.

Turn a watermelon into a RetroPie games console

OK Cedrick, we don’t need to know why, but we have to know how you turned a watermelon into a games console.

This has got to be a world first. What started out as a regular RetroPie project has blown up reddit due to the unusual choice of casing for the games console: nearly 50,000 redditors upvoted this build within a week of Cedrick sharing it.

See, we’re not kidding

What’s inside?

  • Raspberry Pi 3
  • Jingo Dot power bank (that yellow thing you can see below)
  • Speakers
  • Buttons
  • Small 1.8″ screen
Cedrick’s giggling really makes this video

Retropie

While this build looks epic, it isn’t too tricky to make. First, Cedrick flashed the RetroPie image onto an SD card, then he wired up a Raspberry Pi’s GPIO pins to the red console buttons, speakers, and the screen.

Cedrick achieved audio output by adding just a few lines of code to the config file, and he downloaded libraries for screen configuration and button input. That’s it! That’s all you need to get a games console up and running.
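Cedrick’s exact lines aren’t shown, but routing audio out over a Raspberry Pi’s GPIO pins usually comes down to a device tree overlay in /boot/config.txt, something along these lines (an assumption about his particular wiring):

```ini
# /boot/config.txt – illustrative only
dtparam=audio=on
# remap PWM audio from the headphone jack to GPIO 18/19
dtoverlay=audremap,pins_18_19
```

After a reboot, the remapped pins behave like the normal analogue audio output as far as RetroPie is concerned.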

Cedrick just hanging on the train with his WaterBoy

Now for the messy bit

Cedrick had to gut an entire watermelon before he could start getting all the hardware in place. He power-drilled holes for the buttons to stick through, and a Stanley knife provided the precision he needed to get the right-sized gap for the screen.

A gutted watermelon with gaps cut to fit games console buttons and a screen

Rather than drill even more holes for the speakers, Cedrick stuck them in place inside the watermelon using toothpicks. He did try hot glue first but… yeah. Turns out fruit guts are impervious to glue.

Moisture was going to be a huge problem, so to protect all the hardware from the watermelon’s sticky insides, Cedrick lined it with plastic clingfilm.

Infinite lives

And here’s how you can help: Cedrick is open to any tips as to how to preserve the perishable element of his project: the watermelon. Resin? Vaseline? Time machine? How can he keep the watermelon fresh?

Share your ideas on reddit or YouTube, and remember to subscribe to see more of Cedrick’s maverick making in the wild.

The post Turn a watermelon into a RetroPie games console appeared first on Raspberry Pi.

It’s a brand-new NODE Mini Server!

NODE has long been working to create open-source resources to help more people harness the decentralised internet, and their easily 3D-printed designs are perfect for optimising your Raspberry Pi.

NODE wanted to take advantage of the faster processor and up to 8GB RAM on Raspberry Pi 4 when it came out last year. Now that our tiny computer is more than capable of being used as a general Linux desktop system, the NODE Mini Server version 3 has been born.

As with previous versions of NODE’s Mini Server, one of the main goals for this new iteration was to package Raspberry Pi in a way that makes it a little easier to use as a regular mini server or computer. In other words, it’s put inside a neat little box with all the ports accessible on one side.

Black is incredibly slimming

Slimmer and simpler

The latest design is simplified compared to previous versions. Everything lives in a 92mm × 92mm enclosure that isn’t much thicker than Raspberry Pi itself.

The slimmed-down new case comprises a single 3D-printed piece and a top cover made from a custom-designed printed circuit board (PCB) that has four brass-threaded inserts soldered into the corners, giving you a simple way to screw everything together.

The custom PCB cover

What are the new features?

Another goal for version 3 of NODE’s Mini Server was to include as much modularity as possible. That’s why this new mini server requires no modifications to the Raspberry Pi itself, thanks to a range of custom-designed adapter boards. How to take advantage of all these new features is explained at this point in NODE’s YouTube video.

Ooh, shiny and new and new and shiny

Just like for previous versions, all the files and a list of the components you need to create your own Mini Server are available for free on the NODE website.

Leave comments on NODE’s YouTube video if you’d like to create and sell your own Mini Server kits or pre-made servers. NODE is totally open to showcasing any add-ons or extras you come up with yourself.

Looking ahead, making the Mini Server stackable and improving fan circulation is next on NODE’s agenda.

The post It’s a brand-new NODE Mini Server! appeared first on Raspberry Pi.

Give your voice assistant a retro Raspberry Pi makeover

Do you feel weird asking the weather or seeking advice from a faceless device? Would you feel better about talking to a classic 1978 2-XL educational robot from Mego Corporation? Matt over at element14 Community, where tons of interesting stuff happens, has got your back.

Watch Matt explain how the 2-XL toy robot worked before he started tinkering with it. This robot works with Google Assistant on a Raspberry Pi, and answers to a custom wake word.

Kit list

Our recent blog about repurposing a Furby as a voice assistant device would have excited Noughties kids, but this one is mostly for our beautiful 1970s- and 1980s-born fanbase.

Time travel

2-XL, Wikipedia tells us, is considered the first “smart toy”, marketed way back in 1978, and exhibiting “rudimentary intelligence, memory, gameplay, and responsiveness”. 2-XL had a personality that kept kids’ attention, telling jokes and offering verbal support as they learned.

Teardown

Delve under the robot’s armour to see how the toy was built, understand the basic working mechanism, and watch Matt attempt to diagnose why his 2-XL is not working.

Setting up Google Assistant

The Matrix Creator daughter board mentioned in the kit list is an ideal platform for developing your own AI assistant. It’s the daughter board’s 8-microphone array that makes it so brilliant for this task. Learn how to set up Google Assistant on the Matrix board in this video.

What if you don’t want to wake your retrofit voice assistant in the same way as all the other less dedicated users, the ones who didn’t spend hours of love and care refurbishing an old device? Instead of having your homemade voice assistant answer to “OK Google” or “Alexa”, you can train it to recognise a phrase of your choice. In this tutorial, Matt shows you how to set up a custom wake word with your voice assistant, using word detection software called Snowboy.

Keep an eye on element14 on YouTube for the next instalment of this excellent retrofit project.

The post Give your voice assistant a retro Raspberry Pi makeover appeared first on Raspberry Pi.

Nandu’s lockdown Raspberry Pi robot project

Nandu Vadakkath was inspired by a line-following robot built (literally) entirely from salvage materials that could wait patiently and purchase beer for its maker in Tamil Nadu, India. So he set about making his own, but with the goal of making it capable of slightly more sophisticated tasks.

“Robot, can you play a song?”

Hardware

Robot comes when called, and recognises you as its special human

Software

Nandu had ambitious plans for his robot: navigation, speech and listening, recognition, and much more were on the list of things he wanted it to do. And in order to make it do everything he wanted, he incorporated a lot of software, including:

Robot shares Nandu’s astrological chart
  • Python 3
  • virtualenv, a tool for creating isolating virtual Python environments
  • the OpenCV open source computer vision library
  • the spaCy open source natural language processing library
  • the TensorFlow open source machine learning platform
  • Haar cascade algorithms for object detection
  • A ResNet neural network with the COCO dataset for object detection
  • DeepSpeech, an open source speech-to-text engine
  • eSpeak NG, an open source speech synthesiser
  • The MySQL database service

So how did Nandu go about trying to make the robot do some of the things on his wishlist?

Context and intents engine

The engine uses spaCy to analyse sentences, classify all the elements it identifies, and store all this information in a MySQL database. When the robot encounters a sentence with a series of possible corresponding actions, it weighs them to see what the most likely context is, based on sentences it has previously encountered.
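The weighing step can be sketched in plain Python (hypothetical action names and counts; Nandu’s engine classifies sentences with spaCy and keeps its history in MySQL rather than the in-memory counters shown here):

```python
from collections import Counter

# how often each candidate action has co-occurred with context words
history = {
    "play_music": Counter({"song": 5, "play": 4, "volume": 1}),
    "tell_weather": Counter({"rain": 3, "today": 2, "play": 1}),
}

def most_likely_action(words):
    # weigh each action by how strongly its past contexts match the new sentence
    scores = {action: sum(counts[w] for w in words)
              for action, counts in history.items()}
    return max(scores, key=scores.get)
```

Given the words `["play", "a", "song"]`, the history above makes `play_music` the clear winner.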

Getting to know you

The robot has been trained to follow Nandu around but it can get to know other people too. When it meets a new person, it takes a series of photos and processes them in the background, so it learns to remember them.

Nandu's home made robot
There she blows!

Speech

Nandu didn’t like the thought of a basic robotic voice, so he searched high and low until he came across the MBROLA UK English voice. Have a listen in the videos above!

Object and people detection

The robot has an excellent group photo function: it looks for a person, calculates the distance between the top of their head and the top of the frame, then tilts the camera until this distance is about 60 pixels. This is a lot more effort than some human photographers put into getting all of everyone’s heads into the frame.
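That framing logic boils down to a simple feedback loop. A sketch of the idea, with assumed tilt units and tolerances (not Nandu’s actual code):

```python
TARGET_GAP = 60   # desired pixels from top of head to top of frame
TOLERANCE = 5     # how close counts as framed well enough
STEP = 2          # tilt change per adjustment, in assumed degrees

def tilt_adjustment(head_top_y):
    """Return the next tilt change: positive tilts the camera up,
    which moves the subject down in the frame."""
    error = head_top_y - TARGET_GAP
    if abs(error) <= TOLERANCE:
        return 0                          # close enough: stop adjusting
    return STEP if error < 0 else -STEP   # nudge towards the target gap
```

The robot would call this once per captured frame until it returns 0, then take the photo.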

Nandu has created a YouTube channel for his robot companion, so be sure to keep up with its progress!

The post Nandu’s lockdown Raspberry Pi robot project appeared first on Raspberry Pi.

Explore well-being in September with Digital Making at Home

September is wellness month at Digital Making at Home. Your young makers can code along with our educators every week to create projects which focus on their well-being. This week’s brand-new projects are all about embracing the things that make you feel calm.

Start coding with our all-new projects now!

Through Digital Making at Home, we invite parents and kids all over the world to code and make along with us and our new projects, videos, and live streams every week.

This week’s live stream will take place on Wednesday at 5.30pm BST / 12.30pm EDT / 10.00pm IST at rpf.io/home. Let your kids join in so they can progress to the next stage of learning to code with Scratch!

The post Explore well-being in September with Digital Making at Home appeared first on Raspberry Pi.

Recreate Q*bert’s cube-hopping action | Wireframe #42

Code the mechanics of an eighties arcade hit in Python and Pygame Zero. Mark Vanstone shows you how

Players must change the colour of every cube to complete the level.

Late in 1982, a funny little orange character with a big nose landed in arcades. The titular Q*bert’s task was to jump around a network of cubes arranged in a pyramid formation, changing the colours of each as they went. Once the cubes were all the same colour, it was on to the next level; to make things more interesting, there were enemies like Coily the snake, and objects which helped Q*bert: some froze enemies in their tracks, while floating discs provided a lift back to the top of the stage.

Q*bert was designed by Warren Davis and Jeff Lee at the American company Gottlieb, and soon became such a smash hit that, the following year, it was already being ported to most of the home computer platforms available at the time. New versions and remakes continued to appear for years afterwards, with a mobile phone version appearing in 2003. Q*bert was by far Gottlieb’s most popular game, and after several changes in company ownership, the firm is now part of Sony’s catalogue – Q*bert’s main character even made its way into the 2015 film, Pixels.

Q*bert uses isometric-style graphics to draw a pseudo-3D display – something we can easily replicate in Pygame Zero by using a single cube graphic with which we make a pyramid of Actor objects. Starting with seven cubes on the bottom row, we can create a simple double loop to create the pile of cubes. Our Q*bert character will be another Actor object which we’ll position at the top of the pile to start. The game screen can then be displayed in the draw() function by looping through our 28 cube Actors and then drawing Q*bert.
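The double loop above might look like this, with the position maths separated out so you can see the shape it builds (a sketch assuming 64-pixel column and 48-pixel row spacing; download Mark’s code for his actual listing):

```python
def pyramid_positions(top=(320, 100), rows=7, col_gap=64, row_gap=48):
    """Centre coordinates for the pyramid: one cube on the top row,
    seven on the bottom, 28 in all."""
    positions = []
    for row in range(rows):              # row 0 is the top of the pyramid
        count = row + 1                  # one more cube per row going down
        for col in range(count):
            x = top[0] + (col - (count - 1) / 2) * col_gap
            y = top[1] + row * row_gap
            positions.append((x, y))
    return positions

# In the game, each position becomes a Pygame Zero Actor:
# cubes = [Actor("cube", center=pos) for pos in pyramid_positions()]
```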

Our homage to Q*bert. Try not to fall into the terrifying void.

We need to detect player input, and for this we use the built-in keyboard object and check the cursor keys in our update() function. We need to make Q*bert move from cube to cube so we can move the Actor 32 pixels on the x-axis and 48 pixels on the y-axis. If we do this in steps of 2 for x and 3 for y, we will have Q*bert on the next cube in 16 steps. We can also change his image to point in the right direction depending on the key pressed in our jump() function. If we use this linear movement in our move() function, we’ll see the Actor go in a straight line to the next block. To add a bit of bounce to Q*bert’s movement, we add or subtract (depending on the direction) the values in the bounce[] list. This will make a bit more of a curved movement to the animation.
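The movement arithmetic can be sketched on its own: 32 pixels across and 48 down in 16 steps of (2, 3), with a bounce offset curving the path. The bounce values below are an assumption (they sum to zero, so the end point is unchanged); Mark’s `bounce[]` list may differ:

```python
BOUNCE = [-3, -3, -2, -2, -1, -1, 0, 0, 0, 0, 1, 1, 2, 2, 3, 3]

def jump_path(x, y, direction=1):
    """All 16 intermediate positions for one jump (direction=-1 jumps left)."""
    path = []
    for step in range(16):
        x += 2 * direction             # 16 steps of 2 = 32 pixels across
        y += 3 + BOUNCE[step]          # 16 steps of 3 = 48 pixels down
        path.append((x, y))
    return path
```

Starting from (100, 100), the final position is (132, 148): exactly one cube along and one cube down.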

Now that we have our long-nosed friend jumping around, we need to check where he’s landing. We can loop through the cube positions and check whether Q*bert is over each one. If he is, then we change the image of the cube to one with a yellow top. If we don’t detect a cube under Q*bert, then the critter’s jumped off the pyramid, and the game’s over. We can then do a quick loop through all the cube Actors, and if they’ve all been changed, then the player has completed the level. So those are the basic mechanics of jumping around on a pyramid of cubes. We just need some snakes and other baddies to annoy Q*bert – but we’ll leave those for you to add. Good luck!

Here’s Mark’s code for a Q*bert-style, cube-hopping platform game. To get it running on your system, you’ll need to install Pygame Zero. And to download the full code and assets, head here.

Get your copy of Wireframe issue 42

You can read more features like this one in Wireframe issue 42, available directly from Raspberry Pi Press — we deliver worldwide.

And if you’d like a handy digital version of the magazine, you can also download issue 42 for free in PDF format.

Make sure to follow Wireframe on Twitter and Facebook for updates and exclusive offers and giveaways. Subscribe on the Wireframe website to save up to 49% compared to newsstand pricing!

The post Recreate Q*bert’s cube-hopping action | Wireframe #42 appeared first on Raspberry Pi.

Raspberry Pi retro player

We found this project at TeCoEd and we loved the combination of an OLED display housed inside a retro Argus slide viewer. It uses a Raspberry Pi 3 with Python and OpenCV to pull out single frames from a video and write them to the display in real time.​

TeCoEd names this creation the Raspberry Pi Retro Player, or RPRP, or – rather neatly – RP squared. The Argus viewer, he tells us, was a charity-shop find that cost just 50p.  It sat collecting dust for a few years until he came across an OLED setup guide on hackster.io, which inspired the birth of the RPRP.

Timelapse of the build and walk-through of the code

At the heart of the project is a Raspberry Pi 3 which is running a Python program that uses the OpenCV computer vision library.  The code takes a video clip and breaks it down into individual frames. Then it resizes each frame and converts it to black and white, before writing it to the OLED display. The viewer sees the video play in pleasingly retro monochrome on the slide viewer.

Tiny but cute, like us!

TeCoEd ran into some frustrating problems with the OLED display, which, he discovered, uses the SH1106 driver, rather than the standard SSD1306 driver that the Adafruit CircuitPython library expects. Many OLED displays use the SSD1306 driver, but it turns out that cheaper displays like the one in this project use the SH1106. He has made a video to spare other makers this particular throw-it-all-in-the-bin moment.

Tutorial for using the SH1106 driver for cheap OLED displays

If you’d like to try this build for yourself, here’s all the code and setup advice on GitHub.

Wiring diagram

TeCoEd is, as ever, our favourite kind of maker – the sharing kind! He has collated everything you’ll need to get to grips with OpenCV, connecting the SH1106 OLED screen over I2C, and more. He’s even told us where we can buy the OLED board.

The post Raspberry Pi retro player appeared first on Raspberry Pi.

Raspberry Pi + Furby = ‘Furlexa’ voice assistant

How can you turn a redundant, furry, slightly annoying tech pet into a useful home assistant? Zach took to howchoo to show you how to combine a Raspberry Pi Zero W with Amazon’s Alexa Voice Service software and a Furby to create Furlexa.

Furby was pretty impressive technology, considering that it’s over 20 years old. It could learn to speak English, sort of, by listening to humans. It communicated with other Furbies via infrared sensor. It even slept when its light sensor registered that it was dark.

Furby innards, exploded

Zach explains why Furby is so easy to hack:

Furby is comprised of a few primary components — a microprocessor, infrared and light sensors, microphone, speaker, and — most impressively — a single motor that uses an elaborate system of gears and cams to drive Furby’s ears, eyes, mouth and rocker. A cam position sensor (switch) tells the microprocessor what position the cam system is in. By driving the motor at varying speeds and directions and by tracking the cam position, the microprocessor can tell Furby to dance, sing, sleep, or whatever.

The original CPU and related circuitry were replaced with a Raspberry Pi Zero W

Zach continues: “Though the microprocessor isn’t worth messing around with (it’s buried inside a blob of resin to protect the IP), it would be easy to install a small Raspberry Pi computer inside of Furby, use it to run Alexa, and then track Alexa’s output to make Furby move.”

What you’ll need:

Harrowing

Running Alexa

The Raspberry Pi is running Alexa Voice Service (AVS) to provide full Amazon Echo functionality. Amazon AVS doesn’t officially support the tiny Raspberry Pi Zero, so lots of hacking was required. Point 10 on Zach’s original project walkthrough explains how to get AVS working with the Pimoroni Speaker pHAT.

Animating Furby

A small motor driver board is connected to the Raspberry Pi’s GPIO pins, and controls Furby’s original DC motor and gearbox: when Alexa speaks, so does Furby. The Raspberry Pi Zero can’t supply enough juice to power the motor, so instead, it’s powered by Furby’s original battery pack.

Software

There are three key pieces of software that make Furlexa possible:

  1. Amazon Alexa on Raspberry Pi – there are tonnes of tutorials showing you how to get Amazon Alexa up and running on your Raspberry Pi. Try this one on instructables.
  2. A script to control Furby’s motor – howchooer Tyler wrote the Python script that Zach is using to drive the motor, and you can copy and paste it from Zach’s howchoo walkthrough.
  3. A script that detects when Alexa is speaking and calls the motor program – Furby detects when Alexa is speaking by monitoring the contents of a file whose contents change when audio is being output. Zach has written a separate guide for driving a DC motor based on Linux sound output.
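The detection in step 3 relies on the fact that ALSA exposes each sound device’s state under /proc. A minimal sketch of the idea (the card/device path is an assumption and varies between setups; Zach’s full guide covers the details):

```python
def alexa_is_speaking(status_path="/proc/asound/card0/pcm0p/sub0/status"):
    """True while the sound card is playing: ALSA's status file reads
    'state: RUNNING' during playback."""
    try:
        with open(status_path) as f:
            return "state: RUNNING" in f.read()
    except FileNotFoundError:
        return False            # no such device: definitely not speaking
```

A loop can poll this every fraction of a second and run the motor script while it returns True, so Furby’s mouth moves whenever Alexa talks.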
Teeny tiny living space

The real challenge was cramming the Raspberry Pi Zero plus the Speaker pHAT, the motor controller board, and all the wiring back inside Furby, where space is at a premium. Soldering wires directly to the GPIO saved a bit of room, and foam tape holds everything together nice and tightly. It’s a squeeze!

Zach is a maker extraordinaire, so check out his projects page on howchoo.

The post Raspberry Pi + Furby = ‘Furlexa’ voice assistant appeared first on Raspberry Pi.

Coding for kids and parents with Digital Making at Home

Through Digital Making at Home, we invite you and your kids all over the world to code and make along with us and our new videos every week.

Since March, we’ve created over 20 weeks’ worth of themed code-along videos for families to have fun with and learn at home. Here are some of our favourite themes — get coding with us today!

A mother and child coding at home

If you’ve never coded before…

Follow along with our code-along video released this week and make a digital stress ball with us. In the video, we’ve got 6-year-old Noah trying out coding for the first time!

Code fun video games

Creating your own video games is a super fun, creative way to start coding and learn what it’s all about.

Check out our code-along videos and projects where we show you:

A joystick on a desktop

Build something cool with your Raspberry Pi

If you have a Raspberry Pi computer at home, then get it ready! We’ve got make-along videos showing you:

Top down look of a simple Raspberry Pi robot buggy

Become a digital artist

Digital making isn’t all about video games and robots! You can use it to create truly artistic projects as well. So come and explore with us as we show you:

Lots more for you to discover

You’ll find many more code-along videos and projects on the rpf.io/home page. Where do you want your digital making journey to take you?

The post Coding for kids and parents with Digital Making at Home appeared first on Raspberry Pi.

Beginners’ coding for kids with Digital Making at Home

Have your kids never coded before? Then our Digital Making at Home video this week is perfect for getting them started.

A girl doing digital making on a tablet

In our free code-along video this week, six-year-old Noah codes his first Scratch project guided by Marc from our team. The project is a digital stress ball, because our theme for September is wellness and looking after ourselves.

Follow our beginners’ code-along video now!

Through Digital Making at Home, we invite parents and kids all over the world to code and make along with us and our new videos and live stream every week.

Our live stream will take place on Wednesday at 5.30pm BST / 12.30pm EDT / 10.00pm IST at rpf.io/home. Let your kids join in so they can progress to the next stage of learning to code with Scratch!

The post Beginners’ coding for kids with Digital Making at Home appeared first on Raspberry Pi.

Self-driving trash can controlled by Raspberry Pi

YouTuber extraordinaire Ahad Cove HATES taking out the rubbish, so he decided to hack a rubbish bin/trash can – let’s go with trash can from now on – to take itself out to be picked up.

Sounds simple enough? The catch is that Ahad wanted to create an AI that can see when the garbage truck is approaching his house and trigger the garage door to open, then tell the trash can to drive itself out and stop in the right place. This way, Ahad doesn’t need to wake up early enough to spot the truck and manually trigger the trash can to drive itself.

Hardware

The trash can’s original wheels weren’t enough on their own, so Ahad brought in an electronic scooter wheel with a hub motor, powered by a 36V lithium ion battery, to guide and pull them. Check out this part of the video to hear how tricky it was for Ahad to install a braking system using a very strong servo motor.

The new wheel sits at the front of the trash can and drags the original wheels at the back along with it

An affordable driver board controls the speed, power, and braking system of the garbage can.

The driver board

Tying everything together is a Raspberry Pi 3B+. Ahad uses one of the GPIO pins on the Raspberry Pi to send the signal to the driver board. He started off the project with a Raspberry Pi Zero W, but found that it was too fiddly to get it to handle the crazy braking power needed to stop the garbage can on his sloped driveway.

The Raspberry Pi Zero W, which ended up getting replaced in an upgrade

Everything is kept together and dry with a plastic snap-close food container Ahad lifted from his wife’s kitchen collection. Ssh, don’t tell.

Software

Ahad uses an object detection machine learning model to spot when the garbage truck passes his house. He handles this part of the project with an Nvidia Jetson Xavier NX board, connected to a webcam positioned to look out of the window watching for garbage trucks.

Object detected!

Opening the garage door

Ahad’s garage door has a wireless internet connection, so he connected the door to an app that communicates with his home assistant device. The app opens the garage door when the webcam and object detection software see the garbage truck turning into his street. All this works with the kit inside the trash can to get it to drive itself out to the end of Ahad’s driveway.

There she goes! (With her homemade paparazzi setup behind her)

Check out the end of Ahad’s YouTube video to see how human error managed to put a comical damper on the maiden voyage of this epic build.

The post Self-driving trash can controlled by Raspberry Pi appeared first on Raspberry Pi.
