11.12.23

Four experts, four opinions: Salford academics do a deep dive into AI

Categories: Salford 360

One thing is for sure in 2023: there has been no escaping Artificial Intelligence (AI). It's a topic that has dominated the headlines throughout the year, and this technological revolution shows no signs of slowing down.

In November the first ever global AI Safety Summit with world leaders took place at Bletchley Park in Milton Keynes, the exact spot where, some 80 years earlier, Alan Turing and his team of codebreakers cracked the Enigma code, accelerating the end of World War II. AI was also the word on everyone’s lips, dominating ‘word of the year’ lists at the Oxford, Cambridge and Merriam-Webster dictionaries. And just last week, the government launched the ‘Manchester Prize’, a £1m competition, named after the birthplace of the modern computer, to solve some of the world’s biggest problems with AI.

This week the University of Salford will be doing a deep dive into AI, with our academics hosting a special talk on the subject as part of the iconic Royal Institution Christmas Lectures. First broadcast in 1936, the Christmas Lectures are the oldest science television series in the world, and this year’s theme tackles the most important and rapidly evolving field of science today – AI and machine learning. Lectures will take place on our Peel Park campus on Tuesday 12 December, and at the University of York on Thursday 14 December. Also on Tuesday 12 December, our first GMIoT Student Conference will see Professor Andy Miah, Chair in Science Communication and Future Media, deliver a keynote speech on the future of immersive technology.

In the first of our new series, Salford 360, our experts take a holistic look at AI: what it means in their field of research and how it will impact industry and society alike.

Professor Andy Miah, Chair in Science Communication and Future Media in the School of Science, Engineering and Environment, looks at the ongoing debate around the ethics of AI and how we can use this technology for good:

"The last 18 months has seen a giant leap in the public use of artificial intelligence, but only a small step in the public discussion about how it is changing our lives. The consequence of this is the beginning of a new technological panic, the ripples of which will continue for the next decade. We’ve already seen the first wave of an AI workers’ revolutions with the highly visible strikes from creative industries' personnel in the USA, which have been helpful in foregrounding how much more we must do to protect peoples’ futures.

We can expect to see more of these strikes unless we address the skills transition requirements across all sectors. While some of the most thoughtful innovators are sceptical about whether this transition is even possible, what’s clear is that we haven’t properly engaged society on what that future could look like, and it’s imperative that this is done quickly. We are still not teaching AI in all subjects at all levels of education, and we urgently need to switch into a new gear.

We are at a very challenging point in human history where technology is both our biggest threat and our greatest lever to push back against some of the biggest existential challenges facing life on Earth. It’s imperative that this is addressed at all levels of society, from schools to our most sophisticated industries.

Our first step must be to embed learning about artificial intelligence across our entire curriculum. When people are inspired to employ technology for good, rather than restrict their use of it for fear of the bad it may generate, a whole range of new perspectives and possibilities emerge, and we need to harness this creative, innovative energy to ensure AI helps humanity evolve, biologically and morally."

Robin Brown, Lecturer in Journalism in the School of Arts, Media and Creative Technology, explains how AI is both good and bad news for journalists:

"Across the world journalists are looking at AI and running their very own risk-reward analysis. Is iterative AI – a rolling update of software that improves itself with every new version – an opportunity or a threat? Or both? AI could prove a useful tool in newsrooms, reducing the need for time-consuming legwork: combing data, transcribing interviews. But it poses a significant threat to jobs and many in the industry think journalists are facing an existential crisis.

The industry has been through one cataclysm already thanks to digitisation – and it’s hard to look at last week’s swingeing job cuts announced by Daily Mirror and Manchester Evening News publisher Reach without pondering whether artificial intelligence has a role to play in the company’s third round of redundancies this year. Look at any recruitment website and there are adverts for AI-assisted journalism roles – and plenty more for candidates to train AI software in writing.

By its very nature AI learns from existing material and produces something that, in theory, looks similar. But the AI-generated journalism I’ve seen so far has been bland, generic – word soup that doesn't really mean anything. Publishers including CNET and MSN have made embarrassing mistakes by employing AI without sufficient oversight. Clearly the current generation is not up to scratch; what comes next will be significantly better – and quickly.

The threat, as I see it, is to those people working in journalism who edit or repackage existing content. AI will get significantly better at taking existing raw material and parcelling it up into viable, if not flashy or creative, content. How long until it can write better headlines, improve SEO and readability, edit TV or audio packages, lay out a page or parcel up assets into a social media video? A few short years is my guess, and I suspect news publishers such as Reach are thinking the same thing.

Are we doomed then? I think not. It’s hard to see any near-future scenario in which AI can go and interview someone living in a mould-infested council house, follow a paper trail to uncover a fraud, or nurture sources. How will it spot a story? How will it discern fact from fake news? Unearth important stories that have serious implications for wider society? These are the soft skills of journalism – skills that can only be built up with time, expertise, insight and old-fashioned shoe leather: walking the streets, meeting people, getting a feel for a patch.

Some local publishers, such as the Manchester Mill and its offshoots, have looked to embrace these old-fashioned virtues and seen healthy growth and engagement. Others have chosen a click-led strategy. AI is coming for one set of those jobs but, for now, it can’t come close to the other."

Danielle Lilley, Lecturer in Policing in the School of Health and Society, asks what AI could do to policing and crime:

"Artificial intelligence (AI) has some obvious benefits – efficiency, elimination of human error and 24/7 availability with no fatigue. Current application of AI in UK policing includes crime prediction tools, harm risk assessments for future offending and facial recognition technology (which can be used at live major policing events). This is said to lead to reduced resourcing pressures, increased public safety and crime prevention. However, academics and civil liberties groups have argued that this also represents state surveillance and infringement of privacy rights on a mass scale.

A key argument is that the machines are being trained on historical data that contains inherent bias. Predictive models can magnify these biases and lead to over-policing of communities that are already disproportionately affected. For instance, in 2020 the Court of Appeal ruled that South Wales Police’s use of live facial recognition technology was unlawful, as not enough had been done to check for racial or gender bias in the system’s algorithms.

AI is also changing crime, and policing will need to understand the technology to keep pace with it. Audio and visual impersonation AI is an emerging threat. Convincing deepfakes can imitate public figures or ordinary people, allowing the spread of harmful fake news and damaging images, or persuading people to hand over money. AI was used in 2019 to trick an energy firm employee into transferring £200,000 from the company account, following a telephone conversation in which fraudsters used AI to replicate a colleague’s voice.

Currently, experts warn that the pace of expansion and adoption of AI technology is outstripping the legislative controls in place to minimise harm. Terrorism legislation, for instance, references "encouraging other people"; training a chatbot to push certain ideologies may not fit that definition.

Training, knowledge and skills gaps need to be bridged. However, used correctly, the technology could provide significant benefits to policing and the prevention and detection of crime."
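Lilley's point about historical bias can be made concrete with a small, purely illustrative sketch (not drawn from the article, and using entirely synthetic, hypothetical data): a toy model trained on "recorded crime" figures from two areas with identical underlying offending rates, one of which is patrolled more heavily, learns to score the over-policed area as higher risk.

    # Illustrative sketch only: a toy demonstration of how a model trained on
    # biased historical records can reproduce that bias. All data is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    area = rng.integers(0, 2, n)                 # 0 = area A, 1 = area B
    offended = rng.random(n) < 0.10              # same true rate in both areas
    detection = np.where(area == 0, 0.8, 0.4)    # area A is patrolled more heavily
    recorded = offended & (rng.random(n) < detection)

    # A simple "crime prediction" model trained only on the recorded data.
    model = LogisticRegression().fit(area.reshape(-1, 1), recorded)
    risk = model.predict_proba([[0], [1]])[:, 1]
    print(f"Predicted risk - area A: {risk[0]:.1%}, area B: {risk[1]:.1%}")
    # The model scores area A as roughly twice as "risky", reflecting patrol
    # patterns rather than behaviour - which could then justify yet more patrols.

In this sketch the skew comes entirely from how the data was collected, which is the essence of the concern raised about predictive policing tools.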

Dr Maria Kutar, Director of Undergraduate Business Programmes in Salford Business School, looks at AI’s impact on the business world:

"Artificial Intelligence is a term applied to a very wide range of technology tools available to business, ranging from highly specialised deep learning and machine learning applications to easily accessible generative AI technologies such as Chat GPT. Prior to 2023 it was mostly larger businesses adopting AI, with their usage typically incorporating specialist and organisation specific applications such as process automation, speech and image recognition.

However, in this rapidly evolving landscape there are now many AI applications that businesses can readily access, such as chatbots that can support customer interactions, and tools such as Microsoft Azure, which can be used to analyse existing data to identify patterns and inform decision-making. These are developing quickly to exploit the capabilities of large language models that enable text analysis and generation.

Organisations responding to Greater Manchester Chamber of Commerce’s Quarterly Economic Survey (October 2023) indicated that they are adopting content generation tools such as ChatGPT, as well as speech recognition and machine learning. This points to a shift from organisations adopting specialist tools that help them exploit their internal data towards using AI to support or change their processes, including the increasing adoption of robotic process automation.

The roll-out of AI within widely used enterprise suites such as Microsoft 365 and Google Workspace means that many businesses will be able to use AI technologies for some activities. As with all technologies, the key to successful adoption is for businesses to be clear about the reason for introducing AI, to align it with organisational strategies, and to research the options carefully before committing to a particular solution. With a well-planned roadmap there are huge opportunities for businesses to enhance their use of digital technologies by incorporating AI."
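As a purely illustrative sketch of the kind of pattern-finding Dr Kutar describes (a generic example, not the Microsoft Azure tooling mentioned above, and using hypothetical figures), a business might cluster its existing sales records to surface customer segments that can inform decisions:

    # Illustrative sketch only: identifying patterns in existing business data.
    # The customer records below are entirely hypothetical.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Hypothetical customer records: [orders per year, average order value (GBP)]
    customers = np.array([
        [52, 12.0], [48, 15.5], [45, 11.0],   # frequent, low-value buyers
        [4, 310.0], [6, 280.0], [3, 295.0],   # rare, high-value buyers
        [20, 75.0], [24, 60.0], [18, 82.0],   # mid-range buyers
    ])

    # Scale the features, then group customers into three segments.
    segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
        StandardScaler().fit_transform(customers)
    )
    print(segments)  # e.g. three distinct groups, one label per customer
    # Each segment could then inform a different marketing or stocking decision.

The value here is not the clustering itself but, as noted above, being clear about the business question the analysis is meant to answer before adopting any particular tool.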

We hope you enjoyed the first instalment of Salford 360. Stay tuned to our channels for more expert takes on topical matters coming soon.

For all press office enquiries please email communications@salford.ac.uk.