Is the Edge enough to say that a system is private? This question from the floor opened the debate. There are many examples of applications and services operating in our daily lives that raise serious concerns. Firstly, we must separate privacy concerns from security concerns. Policy makers need to look at where regulation can enhance individuals' privacy and security, and to some extent the EU and the UK have better regulations than the other side of the Atlantic.

Another panelist noted that it is important to recognise that a lot of the data collected and held is held privately and won't be analysed beyond the realms of the company that holds it. A lot of people talk broadly about regulation, and it is used as a plaster for everything, but nobody really knows how to regulate. She wondered whether it would be possible for academics to pursue radical innovation outside of industry, given that academics are often bound by the terms of the industry partners they team up with.

A further panelist expanded on this point: from an ethical point of view, the 'routinisation' and tick-box way in which the terms and conditions of apps and devices are presented to and accepted by users means there is no valid consent to data gathering. This in turn means that we are feeding a system which is a 'black box'. We need to think about these systems and processes, and about how this data will be used for, or indeed against, us in the future.
On the other side of privately owned Edge technologies and AI software we have 'Open Source': computer software released under a license in which the copyright holder grants anyone the right to use its source code for any purpose. One attendee asked whether this model will be strong enough to keep up with the current pace of development, the power players and the fast-moving world of AI. Experts discussed how the boundaries of Open Source principles may have become blurred as large companies increasingly dominate the space. Historically, Open Source has been a very good thing, bringing a great deal of transparency, but there is growing debate around the rapid growth of large language and vision models and the concentration of power that accompanies them.
What would the panel change about the development of Edge technologies that would make things better? The ethical responsibility should lie with the developers and designers rather than being someone else's problem. Engineers should work alongside social scientists during the development of technologies. There should be places where the everyday person can go for help with abuse, privacy and security issues arising from technology. Regulation can have a 'chilling effect' on innovation, which is often why you find reluctance, but this does not make it any less important.
Can we find a David Attenborough figure for AI and other Edge Tech, someone who could act as an iconic communicator for the everyday person? Rather than just hyper-dark, dystopian TV shows, we need a public figure who can break down the benefits, but also the great risks, of some of this technology.