Web Hosting – à la Hercules (part II)


In part I of this post (written, ahem, two and a half years ago) I described the decision-making process that led me to the fateful choice to create my very own bespoke chat room, and host it on my very own server within the vaults of the University of Nottingham.  By doing this I could keep control of my participants’ data throughout the whole process, and I’d also (he thought optimistically) be able to have some control over the features of the platform, tailoring it to suit my research needs.

So began the epic struggle.

Two initial areas to investigate: how to make a chat room, and how to host it.  For the latter I contacted the University IT department.  I had an initial meeting to discuss options ranging from the very basic to the bells-and-whistles.  A Virtual Machine seemed like the best bet – basically a slice of a University server, a computer always on and always online, given over to my administration.  I then had a follow-up meeting with another chap to discuss my exact requirements.  This was difficult, as I was very vague as to what these were likely to be.  I didn’t understand many of the questions.

What is the requested OS for the VM (Windows, CentOS (Linux) or other)?

Preferred hostname (Please use the server naming convention e.g. UiWapMED01)?


(My original answer to that second question was: “Do I have to specify this?  Is there a ‘default’ option?  I don’t mind!”).

I was suffering from terrible imposter syndrome throughout all this.  In fact no, I just was an imposter.  I was worried that one of these IT-type people might suddenly catch on and say “Hang on!  We’re not giving you one of our precious University computers, you know absolutely fudge all about any of this!”  So I didn’t ask the questions I should have (assuming I even knew what they were), and I nodded reassuringly when I shouldn’t have.

In the discussion it emerged that, as a PhD student, I already had a chunk of personal webspace on a University server, and the IT chap wondered whether it might be possible to do what I wanted using my personal University web space.  Spoiler alert – no.  But this seemed like an easier option to explore before delving into the arcana of trying to run my own web server, so I nodded reassuringly and we agreed to call off the search: I’d look into that first, and be in touch again if need be.

It did, indeed, need be, but back to that in a moment.  I now turned my attention back to the former issue.  How does one just ‘make’ a chat room?  A computer science-y colleague put me on to Node.js and Socket.IO, two JavaScript-based tools that enable lightweight, real-time chat platforms to be created.  I’d spent some time learning a bit of JavaScript with Codecademy, and by copying and pasting bits of code from online tutorials I did indeed manage to construct a chat app that worked locally, on my own computer.  I began to feel like I had almost cracked it, if I could just somehow push all of this online (possibly using one of those little t-shaped sticks like a casino croupier).

However, in the light of the scraps of knowledge I was starting to draw together on the subjects of JavaScript and web development, I began to remember and understand something the IT chap had said about my personal webspace – server-side JavaScript is not supported.  All this Node.js business works exactly like that – running JavaScript code on the server (where a website lives) rather than, or in addition to, the more conventional practice of running it on the client side (i.e. on the computer of the person accessing the website).  So no, my personal webspace was not going to do the trick.

Eventually, despairing of the whole business (not for the last time), I contacted the directors of my PhD programme, wondering if there were any official channels of support for this sort of thing.  It is, after all, a Digital Economy Centre for Doctoral Training, and we did modules with names like Enabling Technologies.  The programme director arranged a meeting with one of the web developers at the Research Institute connected to the CDT.  Javid was extremely kind and helpful, did not imply that I was an idiot who had no business trying to do any of this and would probably end up crashing the entire University network, and instead suggested that I look at pre-existing chat rooms freely available online, if you know where to look.  So he did a bit of looking, and that’s how I came across AJAX-Chat.

There it was on GitHub (an online repository of code), a fully functioning chat room that someone had just made, and then made available to anyone to use, for no personal gain.  It reminded me that the internet as we know it is founded on this kind of not-for-profit knowledge sharing, but this system only works when everyone is an expert.  The attempt to bring this stuff-for-free ethos to non-experts who can’t reciprocally contribute leads to Google and Facebook, services offered apparently for free but very much for profit, and with enormous hidden costs of surveillance and mind-control on top.

Anyway, Javid also put me on to XAMPP, a pre-packaged server environment that needs to be installed on a bare-bones server in order to host web applications.  More generically it’s called a LAMP stack: L for Linux, the operating system that underlies it, A for Apache, a piece of software that organises your server’s communication with the internet, M for MySQL, a database programme that stores and handles data passing through your website, and P for PHP, a programming language that runs on the server behind your web pages and does stuff.  Important stuff.  All of these are necessary for the workings of AJAX-Chat (and many other web applications besides), and the nice people at XAMPP put them together in one freely available package that can be installed comparatively easily.  As a final act of kindness, Javid put me in touch with a senior colleague, a chap called Dominic with whom I had worked on a previous project, who is an administrator for the University’s Microsoft-hosted research servers.  He offered to make me a virtual machine from this resource, on which I could get everything set up and get a better understanding of my requirements before getting back to IT with a request for a Uni VM.

Buzzing on an adrenaline rush of pure progress, I started work on this new puzzle.  Dominic set me up with what I thought of as my “test-bed” VM, running the Linux-based operating system Ubuntu.  Other than the OS, it was a blank slate.  Its actual physical whereabouts as a metal box of flashing lights and whirring fans was, of course, completely unknown to me, and all my communication with it was done remotely, using the command line text interface.  Fortunately I had been learning how to use this as part of my IT skills self-improvement drive, and was able to figure out the relevant commands to connect securely to the VM and then install XAMPP.  Again, this carefree sentence glosses over the technical hitches encountered every step of the way, such as when the XAMPP installation was initially ‘Killed’, with an opaque error code.  I traced the code and found the problem to be a lack of swap space, which I was able to fix with the following commands:

  sudo dd if=/dev/zero of=swapfile bs=1024 count=2000000   # create a ~2 GB file of zeros
  sudo mkswap -f swapfile                                  # format the file as swap space
  sudo swapon swapfile                                     # switch the new swap space on

That’s quite a good example of the endless loop of problem solving needed every step of the way.

With that all sorted, and XAMPP successfully installed, I turned my attention to AJAX-Chat.  This had to be cloned from GitHub (after changing folder permissions of course), and then set up.  The database needed to be configured (cue crash course in MySQL-speak), and then channels and users added to the relevant PHP files (a different channel for each focus group, a different user for each participant).  Fortunately AJAX-Chat came with an excellent readme file, so this was all largely straightforward.

A certain amount of wrinkle smoothing later….

It worked!  Success!  Or at least, interim success!

Now to do it for real, on a UoN server.  I went back to IT, requested and got my University-based VM.  And started the process again.  I had kept meticulous notes of every step in the initial process, so had a step-by-step guide to follow.  Unfortunately, in my rush of victorious pride I neglected to keep such detailed notes during this next part, so while memory tells me there were a number of issues thrown up by differences between the testbed VM and the UoN VM, memory has also suppressed the details of these problems as a defence mechanism, and I am unable to go into them.

Not to worry though, the real kick in the Bolingbrokes was yet to come.

I finally got the whole thing set up as before, this time on the UoN VM.  Tested it in my office on campus – success!  Went home to show it to my almost entirely uninterested wife – error!  Despair and head scratching, though the problem is probably glaringly obvious to anyone who knows about these things.  To access the chat room from outside the University network, as all my participants would need to, required safe passage through the University firewall.  I needed to make a request to IT for a firewall change to allow traffic to my site (chatresearch.nottingham.ac.uk – no longer functioning).  This involved two procedures: I needed to obtain and enable an SSL security certificate, and IT needed to run a diagnostic scan of my site to identify any potential security weaknesses.  The scan duly came back with the following results:

[Screenshot of the scan results, flagging ‘multiple vulnerabilities’]

Once again (and still not for the last time), I completely despaired at this point of ever being able to get this all figured out, and all these ‘multiple vulnerabilities’ resolved.  Still, pulling myself together I was able to see from the results that the versions of PHP and OpenSSL were outdated.  These were both integrated in the XAMPP package, so with the whiff of a solution I went back to the XAMPP installation website to find a more recent version.

There was no more recent version.

Thing is, weaving together the various components of XAMPP into a single installation is a complex business that takes time.  And it’s freeware, so presumably the people doing it have other calls on their time.  So, great though it is, the components of XAMPP had already been superseded – newer releases of PHP and OpenSSL were available, but not as a single convenient package.

So.  No XAMPP then.  Everything I had done to get XAMPP working and to install AJAX-Chat on top was unusable.  And the only way around it was to start from scratch, and to manually, and separately, install the latest versions of Apache, MySQL, PHP, OpenSSL, and finally AJAX-Chat again.


It turned out I couldn’t even use the standard apt-get command to install the most recent version of OpenSSL, as it wasn’t yet available in the APT package repositories.  However, with the good people of the Internet coming to my rescue once again, I was able to find instructions to do everything I needed, here and here.  IT provided me with an SSL certificate, which I figured out how to install and configure, using these instructions.  Here’s another taste of the constant wrinkle-smoothing, taken from my notes:

“Although slightly confused as there wasn’t a .conf file in /etc/apache2/sites-enabled in the name of my domain, only a default file.  The above instructions (and those on other websites) instructed me to run sudo a2ensite chatresearch.nottingham.ac.uk if the file wasn’t there, but that returned an error saying the site doesn’t exist.  Eventually figured out that the a2ensite command enables (as opposed to a2dissite) sites whose config files are present in /etc/apache2/sites-available.  Checking in there I noticed there was a default-ssl.conf file, so I enabled that and used it as the configuration file.  It may be that this is not ok, however.

Then after figuring out how to use FileZilla to connect to my server (using sftp through port 22, although I think I could have left the port blank as that’s the default SFTP port), I transferred the certificate files from my computer to the server, then put them all in /etc/ssl/certs.  Then I edited the file default-ssl.conf to uncomment the relevant lines and change the details to those of my certificate files.  Finally I stopped and started apache.”




My very own, University-hosted chat room, complete with channels, usernames and password-protected logins, chat-log exporting (by querying the MySQL database underpinning the whole thing), emoticons, text gestures (e.g. ‘username waves’), ‘username writing…’ functionality, even customisable colour schemes!  And even better, it all looked awesomely retro.  And even, even better, it worked, it was secure, and it allowed me to do my focus group research.

A truly Herculean labour, if I do say so myself, a gauntlet-run of mind-bending, brain-hurting problem-solving.  And an enormous sense of achievement.

And yes, probably what GCSE IT students do.

Just let me have this one, ok?

Viva la viva!

So after four years of intense study, data collection, analysis, and general head-scratching, my PhD reached and (spoiler alert) cleared its final hurdle: the Dreaded Viva.

Just me, standing alone, locked in a ferocious battle of wits with two pillars of the academic community whose job was to expose my so-called thesis for the deeply flawed sham I suspected it to be, and thereby to repel my ill-judged and presumptuous assault upon the hallowed halls of academia.

Or to put it another way:

An extremely positive and validating experience, a chance to discuss my research as equals with two senior academics, and an opportunity to receive objective feedback on the thesis from a fresh pair of eyes.

It was a strange feeling, walking to the office of my supervisor Professor Svenja Adolphs beforehand, in the knowledge that four years’ work had funnelled down to this one crucial afternoon.  I felt broadly prepared, but it’s difficult to anticipate both the potential lines of questioning and the level of detail that the examiners will want to probe.

One specific question I had tried to prepare for was a request to summarise my thesis.  I had read and heard from various sources that this is a common opening question designed to put the ‘vivand’ at their ease – but it was one I was not looking forward to at all.  On several occasions during the training elements of my PhD programme I had been required to explain my thesis in an ‘elevator pitch’ style, or in 5-minute, 1-minute, or 10-second summaries, and it was something I always found very difficult.  I was extremely pleased, then, when my internal examiner Professor Louise Mullany got the ball rolling by asking me how I came to be interested in the area that I explored in my thesis.  This certainly did help to put me at my ease, and I was able to describe the personal experiences that had led me to my PhD research, including the birth of my twin daughters in Spain!

From then on, Professor Mullany and external examiner Dr Martin Lamb went through the thesis chapter by chapter.  Questions were challenging and probing, and in one case they identified an issue of wording which I conceded was unsatisfactory and needed changing.  Not too much rewriting though, fortunately!  The interview lasted for about an hour and a half, but the time really flew by.  After it was finished I went back to my supervisor’s office to wait while my examiners deliberated.  Then Professor Adolphs, my second supervisor Dr Alex Lang, and I were invited back into Professor Mullany’s office to be told… congratulations!  PhD to be awarded subject to minor corrections to be made to the thesis within one month!

A fantastic feeling after an intense day.  Cue the popping of corks!

In the week since my viva I have had a renewed enthusiasm for my PhD research, and a new sense of self-belief that has galvanised me into teasing journal articles out of the thesis.  This was something I was trying to do prior to the viva, but felt rather paralysed by a lack of confidence in the quality of my research.  Having successfully dealt with the viva, I feel ‘unblocked’, and ready to get back to work.

Next up: journal articles, postdoctoral research proposals, and job applications!

Web Hosting – à la Hercules (part I)


I built a chat room for a PhD research project.

Any write up of this in the PhD Thesis itself will never be able to do justice to the drawn-out, agonising, Herculean labour that this simple, whimsical statement turned out to involve, so I thought I’d do that here instead!

So my PhD involves studying the experiences of people using English as a second language in text-based online communication.  Chat rooms, blogs, comments threads, Instant Messaging, social media, even emails, you name it.  In order to do this, I thought a good first step would be to talk to these people and ask them about their experiences, bringing them together in small focus groups.

Then I thought: why not do it online?  Why not use an online chat room to talk about online communication?  Massive benefits of this include the fact that, as opposed to face-to-face focus groups, there is no need to convert video recording to text transcript (for which you can expect to spend four hours for each hour of recorded material), and that participants can take part from wherever they have an internet-connected computer, meaning that a) they don’t have to travel to the study and b) they can be located pretty much anywhere in the world, multi-time-zone scheduling permitting.

Next question: what platform to use?  I considered a number of possibilities.  WhatsApp looked like a good option for several reasons: it gives you the option to email to yourself the chat log of any conversation you’re having, providing an instant transcript of the conversation, plus conversations are encrypted as they wing their way through the wires of the web.  But you need people’s phone numbers, which I thought was a bit unnecessarily invasive.  Other obvious contenders like Facebook Messenger lacked the option to download the chat (technically possible, but it involves downloading your entire Facebook dataset, which seemed a bit cumbersome.  Ditto with Google Hangouts).  For research purposes, however, a major flaw of all commercial and web-based options was that you can’t be sure where your data is going, where it is being stored, and what is being done with it.

Ethics is a pretty big deal in academic research these days, and in anything to do with internet research, data security is an important ethical consideration.  To satisfy ethics review boards you need to show you’ve given thought to the whereabouts of data collected from research participants at all times – that’s the physical location, so if it’s “in the Cloud” you need to know exactly what that means, what company’s particular subsection of “the Cloud” that is, where their actual computers are on which they store your data, and what laws apply to data storage in that location.

So, reasoned my treacherous brain, we can avoid all of these problematic ethical issues and ensure that the platform has all necessary features like exportable transcripts etc. if we simply… do it ourselves.  You know, make a chat room and host it on a University of Nottingham server.

As I look back on this last sentence I’m not sure what on earth I thought I was thinking.

I should mention at this point that I have no background at all in computer science.  I did a GCSE in IT in 1998.  I had spent some time over the first year of my PhD learning the basics of programming using the excellent Codecademy and had successfully learned enough Python to process Twitter data, and this experience had shown me that with perseverance and a great deal of searching on programming advice forums (Stack Overflow, W3Schools, DigitalOcean, many others…), it’s possible to figure stuff out and do things with computers that initially seem impossible.  So I thought I should be able to do this.

And I was right.  Just.  But every step of the way threw up new and seemingly intractable challenges that not only required the finding of a solution, but often the hasty acquisition and understanding of some fundamental aspect of computer science and web science.  Sometimes it took hours of painstaking online research just to understand what question I needed to ask.

Possibly it was not the most productive use of my time.  But I’ve finally arrived at a point now where I have a cast-iron online platform on which I can do some (hopefully) interesting and (definitely) ethically-sound research.

In part II of this post I’ll describe the process, partly as an aide-mémoire for myself (lest we forget…) and partly in the hope that it might be of some use to anyone equally daft enough to try doing it for themselves.  It will be full of dreadfully dull technical details, but hopefully I can explain it in such a way as to make it accessible to non-CS people like me.

Call For Participants!



Due to a fantastic initial response to this call I am no longer looking for further participants.  If you have come here interested in taking part, then I’m very grateful for your interest in my research, but I will not be running any more chat sessions.

Are you a non-native speaker of English?  Do you use English as a second or foreign language to communicate with people online, or through any form of electronic technology?  If so, then we’d like to talk to you about your experience.  As part of a research partnership between Cambridge University Press and the University of Nottingham we want to talk to people like you, in order to understand how English language teaching can help people communicate online.

Why?  We want to find out:

  • What kinds of electronic communication you use – email, chat rooms, instant messaging apps, online community forums… pretty much anything really, the only limit is that we’re interested in text communication – typed on a keyboard or on a phone – rather than video or audio communication.
  • What you use these things for, which ones you like or dislike, and which ones you find easier or more difficult to use.
  • Whether you have any particular problems communicating with other people using these methods of communication.

What do we want you to do?

We’ll ask you to participate in an online text-chat session with a researcher and three or four other people like you.  The chat will last for about an hour, although the whole process will take an hour and a half of your time.  We’ll ask you to talk about the things listed above, then fill in an online questionnaire.  The whole process will be online, so you don’t have to travel anywhere and can take part from your home computer or anywhere you want.  We only ask that you participate using a full computer keyboard, rather than just a mobile phone, so that everyone can contribute equally.

Mapping the University’s Social Media Footprint – A Practice-Led Project

Between January and April 2016 I undertook a study of the use of Twitter for public engagement among members of the University of Nottingham staff.  The project was run under the auspices of CaSMa – Citizen-Centric Approaches to Social Media Analysis – a research team at the Horizon Digital Economy Research Institute that explores methods of performing social media research that respect the rights to privacy and ownership of personal data of social media users.  Consenting study participants provided data from their Twitter feed by exporting it from a web tool that was designed primarily to allow users to monitor and manage their Twitter activity.  Using the graph visualisation software Gephi I created an image of the network of interactions created by the tweeting, retweeting, quoting, mentioning, favouriting and following events in the data, and performed an analysis of hashtag propagation to look for signs of successful public engagement.


It was a challenging project.  Designing the data collection in line with CaSMa’s citizen-centric ethos required meeting with each participant in person (and consequently much to-ing and fro-ing between the University of Nottingham’s various campuses and partner organisations), talking them through the web tool and the data collection process, and obtaining their written consent to analyse their data.  Once the collection procedure was in place, I had to work out what to do with the data: it was delivered to me in JSON-format text files, and in order to be able to render complete data sets, much sorting and parsing of the data structures was needed.  The text files presented a series of events: tweets, retweets, favouriting, following, etc.  I needed to find, for example, where in the hierarchical text structure I could find the ID of the initiator of a particular event – the person who had written a tweet, or liked a tweet, or retweeted a tweet.  In the latter two cases I also wanted to know the ID of the person whose tweet had been liked or retweeted.  This information was not nested in exactly the same place in every event type, and a considerable amount of time was spent establishing the necessary paths within each event type.  Once this was done, I used Python to retrieve the data and compile it into uniform data sets.  I had not previously done any programming, and getting to grips with the language was a real learning experience.
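To give a flavour of the problem, here is a minimal Python sketch of the kind of per-event-type lookup that was needed.  The event structures and field names below are invented for illustration – the real export nested things differently again:

```python
def actor_and_target(event):
    """Return (initiator_id, target_id) for a Twitter-style event dict.

    The paths below are illustrative: each event type keeps its user IDs
    in a different place in the hierarchy, which is exactly why working
    out the paths took so long.
    """
    kind = event.get("type")
    if kind == "tweet":
        # A plain tweet has only an initiator, no target.
        return event["user"]["id"], None
    if kind == "retweet":
        # The retweeter is the initiator; the original author is the target.
        return event["user"]["id"], event["retweeted_status"]["user"]["id"]
    if kind == "favourite":
        return event["source"]["id"], event["favourited_status"]["user"]["id"]
    raise ValueError(f"Unknown event type: {kind!r}")

# Two toy events, flattened into uniform (initiator, target) pairs.
events = [
    {"type": "tweet", "user": {"id": 1}},
    {"type": "retweet", "user": {"id": 2}, "retweeted_status": {"user": {"id": 1}}},
]
pairs = [actor_and_target(e) for e in events]  # [(1, None), (2, 1)]
```

Once every event type can be reduced to the same (initiator, target) shape, the rest of the pipeline only has to deal with one uniform structure.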

The code compiled primary user, secondary user, and mentioned user data, and with this I created a network graph visualisation.  The final product looked like this:
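For anyone curious about the mechanics, getting the compiled interactions into Gephi amounts to producing an edge list it can import – a CSV with Source and Target columns.  A minimal sketch, with invented user IDs:

```python
import csv
import io

# Uniform (initiator, target) pairs compiled from the event data;
# the IDs here are invented for illustration.
interactions = [(1, 2), (1, 3), (2, 3), (2, 3)]

# Write a Source/Target edge list, the simple CSV shape that Gephi's
# spreadsheet importer understands.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Source", "Target"])
for source, target in interactions:
    writer.writerow([source, target])

edge_csv = buf.getvalue()
```

Duplicate rows (here, two interactions between users 2 and 3) can be merged by the importer into a single weighted edge, which is what gives the graph its sense of interaction strength.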

[Network graph visualisation of the participants’ Twitter interactions]

The layout is determined by a mathematical algorithm, and the colours are the result of a modularity analysis carried out by the software to identify discrete communities based on interactions.  Unsurprisingly, most of the communities in the image above are centred on my participants, although the blue, purple and black communities subsume more than one individual participant, and not, in all cases, by the conscious design of the users themselves.  Outlying coloured dots that seem to have ‘escaped’ their neighbours represent individuals who bridge two communities (and are consequently located equidistant between the two).

Combining this approach with an analysis of hashtags suggested that successful uptake of a hashtag-denoted topic or event can be aided by recruiting partners to help spread the message.  However, detecting true public engagement proved challenging.  Due to the data collection method, full profile data were only collected on users tweeting or retweeting, and not from users favouriting or following, resulting in profile data for only 60% of users.  Consequently, it was not possible to perform a robust analysis of users as ‘inside’ or ‘outside’ the academic community, or to determine to what extent the message was reaching a general ‘public’ rather than circulating around a more specialised audience.  In fact, this consideration raised questions of who constitutes the ‘public’ in public engagement, and whether the concept of a demarcated ‘academia’ is a valid proposition (apologies for all the air-quotes).

Further research could look at finding computational methods to process profile descriptions and produce judgements of the likely affiliation of an individual.  However, this would again raise ethical questions, which are going to become more and more salient in future social media research.

Pathways to STEM


On the 16th of March 2016 I participated in the Pathways to STEM outreach event at the central library in Mansfield.  Around 300 year 10 pupils from schools in the Mansfield area who had shown interest in STEM subjects at school (Science, Technology, Engineering, Mathematics) were invited to come to the library to meet postgraduate students from the University of Nottingham, who brought along examples of, and activities based on, their research.

Two days of training from the UoN Graduate School a few weeks earlier had resulted in around fifty interested postgrads forming small groups based on shared(ish) research interests, with the aim of creating a short activity for the school pupils to engage with.  At this stage of the process, I initially felt a little discouraged, as my very tenuous links to STEM gave me little common ground with the chemists, physicists, biologists and engineers around me.  I found myself in a group with an agricultural scientist specialising in efficiency in dairy farming, and an engineer working on developing new, non-invasive ways to accurately measure the heart rate of newborn babies.  With such disparate disciplines we opted to share a stand at the event, but develop small activities individually, under the umbrella theme of “What Technology Can Do For Us”.

I found it difficult at first to design a short, fun activity based on my research.  I thought that the most salient application of technology within the areas of Applied Linguistics with which I was most familiar was the use of computerised language corpora to study patterns of language in use.  My initial ideas were to involve the students in the process of corpus creation, perhaps to create a corpus from samples of their own classwork, and then perform some basic analyses of their language use.  However, while this is an intriguing idea, it was too involved for the format of the day.

In the end I settled on the idea of exploring the linguistic phenomenon of collocation.  This is the tendency of certain words to ‘attract’ certain other words, and so co-occur together frequently.  To put it another way, it is the tendency of language users to have ‘go to’ combinations of words which they can pull out and use with minimal mental effort.  The strength of attraction between words varies, but at the high end of the scale are semi-fixed combinations such as ‘torrential rain’ and ‘excruciating pain’.  The development of computerised corpora over the past thirty years or so has facilitated the study of this phenomenon, and the strength of attraction can be quantified using various statistical tests that generate scores; the stronger the connection, the higher the score.

In addition to being a good example of the application of technology in the study of language, I chose to focus on collocation because it is something that all native speakers of a language intuitively understand.  Show any native English speaker a sentence in which the word following ‘torrential’ has been removed, and there is a very good chance indeed that they will supply ‘rain’, or possibly ‘downpour’, to fill the gap.  Running with this idea, I thought that I could sell this awareness as a form of mind reading, and the idea for ‘I Can Read Your Mind’ was born.


Using my newly-acquired powers of telepathy to get the students’ attention, I then wanted to explain a little about the use of computerised corpora to study this, and let them try it out for themselves.  I decided to give the students an adjective chosen at random, and ask them to guess the five words that collocate most strongly with it.  Using an online corpus analysis tool I generated a list of these words from the British National Corpus, a 100 million-word collection of text assembled in the 1990s with the intention of creating a representative sample of modern British English.  I wrote a short computer program that compared the students’ five guesses with the top twenty words from the corpus, and scored each guess according to its rank in the top twenty.  Their total scores would then be recorded on a leaderboard, to add a competitive element to the activity.
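The scoring logic worked roughly as follows – this is a reconstruction rather than the original program, the word list is invented, and the exact points-per-rank rule is my assumption:

```python
# Hypothetical top-20 collocate list for one adjective (invented data);
# the real lists came from the British National Corpus.
TOP_TWENTY = ["rain", "downpour", "flood", "storm", "torrent",
              "shower", "river", "water", "weather", "stream",
              "mud", "wind", "night", "day", "season",
              "month", "summer", "winter", "sky", "cloud"]

def score_guesses(guesses, top_twenty):
    """Score guesses by rank: rank 1 earns 20 points, rank 20 earns 1.

    Guesses that miss the top-twenty list score nothing.
    """
    total = 0
    for guess in guesses:
        word = guess.strip().lower()
        if word in top_twenty:
            total += len(top_twenty) - top_twenty.index(word)
    return total

score = score_guesses(["rain", "umbrella", "flood"], TOP_TWENTY)  # 20 + 0 + 18 = 38
```

Totalling the points per team then gives the leaderboard scores directly.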

My teammates prepared great activities.  Shiemaa brought along her cardiac monitor, which allowed the students to see their heart rate in real time simply by holding two light-emitting contacts against their fingertips, and to go home with a print-out of the signal, while Emma prepared a fantastic Monopoly-style game in which students took over the running of a dairy farm for a five-year period, applying various technological methods and seeing their effects on feed price and milk yield!  By our final preparatory meeting the week before the event, we were confident we had a good stand.

The day went smoothly and enjoyably.  In two ninety-minute sessions, the students circulated around the large events room at the library, stopping at the different stands and experiencing a range of scientific and technological works-in-progress, from 3D printers, to disease prevention, to optimised growing conditions for plants, to DNA Jenga.  My team’s activities went well, and the students were suitably wowed by my mind-reading powers.  I learned several interesting things, notably that my activity was a little unfair, as some words had very obvious collocating nouns (the girls who got the word ‘healthy’, for example, scored very highly with ‘food’, ‘diet’, ‘lifestyle’ etc.), whilst other adjectives tended to attract rather more obscure and difficult words.  Furthermore, it was very hard to predict, without looking at the wordlist, which adjectives would be challenging, so even when I became aware of the problem and started pre-selecting adjectives (previously I was using an online random word generator), I still couldn’t guarantee non-zero scores.  Still, most teams seemed to enjoy the activity, and there were several real eye-opening moments.  The team of boys who got the adjective ‘nice’ thought they were being jokers when they wrote down the word ‘guy’… but it turned out to be the top word.


The event as a whole was a great success, and the organisers passed on the following feedback from students and teachers:

Teacher emails:

I just want to thank you firstly for a fantastic event today. I saw every single student engage in an activity and interact with the people who were leading the activities. It was really good to see them enjoy themselves and for some stretch themselves a little.

Thank you for yesterday I had a thoroughly enjoyable and informative afternoon as did my colleagues and more importantly my students.

Student comments:

I learnt lots about how science effects our world

I learnt lots and all the events were cool

I could see science displayed in different ways to how it is done in school

It was a great opportunity for me to apply my research interests and really gave me a fresh perspective on my work.