I feel that I need to have some position on AI, but I also feel out of my depth in approaching it. I am not a user of social media and can see no personal use for generative AI. Hence, I would never claim to understand it as a technology, or how it works. I am, however, fascinated by the impact it might have on education and on society, while recognising that it is more likely to be the unintended consequences that are important, rather than anything we might predict. That said, here are some initial thoughts based on a couple of recent comments.

A recent edition of Times Higher Education had two articles on AI that spoke to me. A great deal has been written about the recent advances in generative AI and ChatGPT, and about whether they will be dangerous or beneficial to teaching and/or research within the university, with some more marginal work on their potential contribution to marketing, student support and other areas of university life.
The first article was by Nick Jennings, the Vice Chancellor of Loughborough, who has a background in AI and who argued for the benefits of generative AI, without neglecting the dangers. This is something that is coming, whether we like it or not. It does have significant advantages and many potential uses across the institution, including in teaching and research. We must not be blind to the dangers, but we must, essentially, go with the flow, work with this new technology and, above all, make the technology work for us. There will be some inevitable changes in practice, and in work patterns, as well as in the way we approach learning and teaching, but that could be no bad thing.
The second article was somewhat different. This was essentially about the role of ‘awe’ in research. The author, Andy Tix, a psychologist, highlighted the sense of awe, the childlike sense of wonder, that is often associated with research, especially at moments of breakthrough and the development of new and original results or applications. This sense of awe, the author suggests, is core to the research process and is something that AI, as such, cannot replicate. We can have a sense of awe at what generative AI might produce, but the AI system itself cannot experience that sense of awe for itself. It is simply not built that way, at least for now. Generative AI can produce originality and, based on probability and comparison with existing models, it can identify when something is original. This is as true in research as it is in art, poetry or any other field. Originality is simply the process of doing something different, doing something that has never been done before, and that can be measured, processed and hence identified by the machine.
What it cannot do so easily is say whether that original thing is of any value or not. If there were a mechanical way of measuring beauty, a formula to be followed, then it might be able to predict whether the original artwork would be considered ‘beautiful’, but it could never be sure. Likewise in research, it might be possible to predict that an original outcome is interesting or useful, but it can never be sure. What it cannot, perhaps, do is feel awe, that sense of ‘wow’ that real breakthroughs inevitably produce.
When I am teaching research methods, and particularly ethnographic methods, to PhD students, one of the factors that I always put forward as essential to ethnography is the idea of ‘surprise’. Ethnography, done properly, when resources allow, is a long-term project, an immersion in another society or social group over many weeks, months, or even years. The question is often asked as to what this kind of long-term immersion can achieve that surveys, observation and a more superficial engagement with the society, through, for example, short-term case studies or even investigative journalism, cannot. If we are relatively familiar with the society, or do a significant amount of background reading, then it sometimes does feel as though there is little left to learn. However, I would always maintain that there comes a point in ethnographic research where the immersion in the society or culture is such that the researcher suddenly becomes surprised; they discover something that they were not expecting. That is the point of revelation, perhaps the point of awe. That is the point, I would suggest, when we can begin to take knowledge forward and propose a new, and original, approach that does say something significant, and occasionally something ground-breaking that changes the way we see society (or ritual, or religion, or whatever the topic in question might be).
If generative AI has difficulty with awe, it cannot, I would suggest, be surprised, particularly in the way that I am suggesting in relation to ethnographic research. Again, as with originality, it can note and identify something that has not emerged in the data before. It is more difficult for it to identify that thing as significant and its implications as ground-breaking. That process of noting, being captured by, sitting in awe of, that surprising piece of data, and of being able to see within it, or beyond it, the possibility of an entirely new way of thinking, is something that, for now, is entirely human.
Andy Tix, in the THE piece, told of a time when, as a young biology student, he was taken on fieldwork and discovered in that experience a sense of awe at what was around him. He argued that this was the core feature of learning and should sit at the heart of our education system. I couldn’t agree more, and that perhaps takes us back to the other article, by Nick Jennings. Much of our education is still very routine, about getting across the facts, or how to interpret the facts, or about learning skills and how to apply them. All of that can be done by generative AI much faster, and generally much better, than we could ever achieve it. If, however, we genuinely design our programmes around learning as awe, or as surprise, and teach our students what to do next, in order to move from the raw sense of awe to the possibility of a new way of thinking (we may perhaps have to work on this), then that would radically change the way we teach, and would continue to place us one step ahead of generative AI, at least for now.