Is ChatGPT our “Sputnik” moment?

Jun 27, 2023

Written by Kieran Fitzpatrick. This article was originally published in the Summer 2023 Edition of the Communicator.

Predictably, the inaugural edition of Communicator in October 1968 focused on defining technical communication and its practitioners. In particular, one article republished a lecture by Professor Sidney Wilcox, who taught technical communication at Arizona State University (ASU) and was then on sabbatical at the Institute of Science and Technology, University of Wales. Wilcox’s audience at ASU typically consisted of engineering students, and it was this context that prompted him to ask probing questions about the nature of technical communication and those who practise it.

Wilcox wondered whether the technical communicator was “a management-oriented professional” or “an hourly paid labourer”. Should their work be recompensed through “a minimum pay scale”? Does the technical communicator have “a code of ethics”? Most pointedly of all, Wilcox asked for reflection on whether “we have developed enough [as an occupation] to prescribe a standard, identifying education?”

We might question why Wilcox was so preoccupied with these issues. Of course, he had a professional interest in increasing the number of people trained in and competent at practising technical communication. However, there was something greater at stake for him: an amalgam of macro-historical forces that piqued his interest in these matters.

On the second page of the article, Wilcox made clear the roots of his professional specialism and his concern with its standing. He wrote,

“The big shift to more literate technical graduates can be traced to a traumatic experience which profoundly affected the whole education of the United States: the loud, clear, and somewhat ominous beep of that first Russian Sputnik.”

He went on to diagnose what for the rest of the century would be known as America’s “Sputnik Moment”, namely that,

“… we had become too complacent, too smug, too confident in our elementary and secondary schools to question whether our educators were doing an effective job in teaching the rank and file of American youth in mathematics, science, and English.”

He pithily reported that, in the vapours of Sputnik’s orbit, Americans “took up the debate of ‘why Johnny can’t read’ and ‘why engineers and scientists can’t write’.”

These debates within American culture amounted to more than a collective panic; they also shaped federal policy-making towards education in the United States for decades to come. Rather than relieving political tensions, Sputnik’s piercing of the Earth’s atmosphere on 4 October 1957 symbolised a raising of the stakes in the stand-off between American capitalism and Soviet communism. Sputnik was an artefact that embodied the USSR’s technological progress and prompted the United States’ scramble to keep up. One of the ways in which the federal government responded was to draft and pass through Congress the National Defense Education Act (NDEA), signed into law on 2 September 1958.

The Act diverted $1 billion of public funding into science and mathematics education in the United States between 1958 and the early sixties, to ensure that American citizens were more technically capable in the disciplines needed to accelerate the development of American science and technology, and to counter the threat posed to American values by Soviet communism. The Act had far-reaching consequences.

In the decades that followed, the NDEA was instrumental in broadening women’s access to college education. By the century’s end, it had helped make American women proportionately greater beneficiaries of university attendance than their male counterparts. Although we should also count the civil rights movements, feminism and Higher Education-specific legislation among the factors explaining these changes, the NDEA was prominent among them. Indeed, it would not be outlandish to view Sputnik, the concurrent Cold War politics and the extent of American soft power as key mechanisms for the spread of two pervasive ethical concepts of our time. The first is that access to higher education is a marker of a society’s continued civilisation. The second is that ‘STEM’ (science, technology, engineering, mathematics) disciplines represent the toolkit by which that civilisation will be built.

More broadly, what Wilcox’s article demonstrates is how macro-historical forces change different forms of work and their meanings within a broader culture. This dynamic is a central aspect of modern history. In the wake of the Holocaust, two Jewish lawyers, Hersch Lauterpacht and Rafael Lemkin, used their shared experience of dislocation and the destruction of their extended families to devise new, competing terms – ‘crimes against humanity’ and ‘genocide’ – to restrict, via international law, the state’s ability to treat those it governed however it chose.

These are markedly different manifestations of the same phenomenon. However, for those of us working as information developers, instructional designers and technical writers more than half a century on from Wilcox’s article, the task is to consider what it means to communicate about – and increasingly with – technology. This is particularly true given the recent rapid developments in AI-powered chatbots such as ChatGPT, Bard and Bing.

In recent months, the Internet has been awash with stories and editorials about the immediate ethical impact of chatbots, and what they may portend for humanity in the coming years. Will ChatGPT, for example, render countless millions of people obsolete in their current occupations and, if so, how are they otherwise to spend their time? How fast should these technologies be allowed to develop, and what regulatory structures are appropriate to govern them in the interests of humans over machines? To what extent do chatbots, and AI more generally, frame ethical issues in particular contexts, and how far should they prompt reflection on the decisions we make while using digital technologies?

The key figures setting the terms of these debates are themselves revealing of socio-economic and political trends in recent decades. Whereas Sputnik was given meaning by its status as an artefact of the Cold War, and ultimately of a stand-off between nation-states and their respective spheres of influence, debates around chatbots are largely being framed by technology billionaires. As the journalist Max Fisher reports in his recent book The Chaos Machine: The Inside Story of How Social Media Rewired Our Minds and Our World, technology billionaires have long presided over a sector that showed little concern for governance structures that might restrain either the computing power or the social impact of the technologies that earned them substantial profits. The results have included harassment, the destruction of property, and mass murder around the world, most prominently in countries of the Global South such as Myanmar and Sri Lanka. This raises the question: if we can agree that chatbots, and Artificial Intelligence more generally, require better standards of governance, why should those standards be devised by those who have presided over their absence for so long?

We as technical communicators should take seriously the task of providing alternative perspectives on the ethics and governance of Artificial Intelligence in the coming decades. In the technology sector we are daily ‘ghosts in the machine’, moving between Subject Matter Experts (SMEs) and their technologies on the one hand and end users on the other, translating the knowledge those SMEs produce. As a result, our sense of what counts as the ‘good’ governance of technology is derived from daily routines that have us observing not only technologies and how they are developed, but also the occupational cultures driving their development. Therefore, one answer to Wilcox’s question – ‘Do technical communicators have a code of ethics?’ – is yes, and it is rooted in lived experience rather than academic or ideological principles.

A second answer to that question turns on the changing demographics of workers in technical communication, who are both translators for SMEs and SMEs in their own right.

It is now increasingly common to come across information developers, instructional designers and technical writers who hold advanced degrees in the humanities and social sciences. I myself have a doctorate in the social history of science, medicine and technology from the University of Oxford, and a number of my colleagues at TWi in Cork, Ireland, earned equivalents in other subjects before transitioning into this sector. Although at the time of writing I cannot find any representative data to support viewing this as an industry-wide trend, it is reasonable to suggest that information development is increasingly seen as a viable opportunity for those with advanced degrees who are looking for better pay and conditions than can be found in Higher Education.

What this represents is an opportunity to couple information development with the skillsets of those who have deep knowledge and understanding of human societies, how these change over time, and the ways in which value systems can be designed to reduce harm. While this answers Wilcox’s question about the technical communicator’s code of ethics in the affirmative, it suggests a negative answer to another of his questions: whether we should prescribe a standardised education. Currently, the technical communication sector’s strength lies in the diversity of our skills and experience as they relate to understanding and modifying relationships between humans and machines.

In practical terms, it may be that companies specialising in information development and technical communication become increasingly populated by workers with deep knowledge of law, anthropology, history and literature, among other disciplines. Just as Wilcox’s article in Communicator proposed a numerical broadening of those well trained in technical communication, I think we are currently at the beginning of a different type of broadening in what it means to develop information, one that is perhaps more qualitative than quantitative in nature. In the coming years, our profession’s ability to navigate complex ethical and legal issues surrounding Artificial Intelligence and its applications will become central to our commercial viability and reputational standing. This will require us, of course, to manage the day-to-day challenges of information development projects. However, we will also need to weigh the significance of ethical decisions made in relation to new technologies against the commercial realities of doing business, all whilst being cognisant of the safety and well-being of end users.

We might think of this as a form of agreement between ourselves, those we work for, and those for whom we produce our content. On this score, Wilcox is again relevant, as he concluded his Communicator article with the following political observation: “The scientific world and the man on the street do not speak the same language, and they never will. But in a democracy they must communicate with each other. The important job of work along this interface is to be done by the career communicators.” The whiff of elitism in the distinction Wilcox drew between “the man on the street” and the “scientific world” gives me pause for thought, but that is perhaps a subject for another time. Nevertheless, I assent to the notion that the relationship between science and society should be rooted in democracy and maintained by information developers such as ourselves. In the coming age of automation and the ethical challenges it will pose, we must ensure the tools and methods we use to maintain that relationship are fit for purpose.

 
