January 2012 saw the publication in the UK of a Royal Society report entitled ‘Shut down or restart: The way forward for computing in UK schools?’ This discussed the dwindling interest in studying Computing at school, a result of a curriculum consisting of little more than the acquisition of basic digital literacy skills. The report was followed by a series of articles in the Guardian newspaper interviewing leading UK companies and highlighting their perception of a computer skills shortage among UK graduates. The series criticized UK universities for running a significant number of over-specialized, dead-end courses in computer science, with poor prospects of employment for those enrolled on them. Statistics for the class of 2010 showed computer science graduates having the highest unemployment rate of any UK undergraduate degree, at 14.7%.
So what is going wrong, and where should we be concentrating our efforts to improve this situation? In the last 50 years we have moved from a position where access to computer hardware was limited, the computational capability of those computers was equally limited, and the ability to make use of that capability was only possible for a few specialists, to a position where almost every aspect of everyday activity involves the use of some form of computational device. We have moved into what has been referred to as the third major phase of computer usage, having passed through the eras of the mainframe and the PC into the realm of information appliances and the myriad applications which they support.
Computers have become ubiquitous, but how much do individual users need to know about the underlying technology in order to benefit from it? It is probable that the majority of the population will never need to, or indeed be able to, understand the technologies on which the applications they use are based. Perhaps Arthur C. Clarke’s dictum (1973) that ‘any sufficiently advanced technology is indistinguishable from magic’ is applicable in this context. What drives the hardware, the operating systems, the networks, the software, the databases, the video games, the graphics, the music, the social networks, the digital cameras, the mobile phones, is not a concern for most users. All they want to do is use the applications made available to them via all these devices. Users are primarily consumers.
At the other end of the spectrum there is a relatively small number of very technically competent individuals who fully understand the details of the various technologies, and are able to research and develop new aspects of those technologies. Academic computing, which has grown out of mathematics, science and engineering departments, continues to provide the research that underpins all these technologies, but the world in which the technologies are used has changed beyond recognition. Whilst a few computer professionals still have a role as constructors of the technology, there is an ever-increasing need for people who understand the technology sufficiently to be able to develop the applications which the consumers require.
Donald Norman (1998) in his insightful book ‘The Invisible Computer’ commented that ‘Appliances [and applications] are consumer products whereas computers are technology products: therein lies the fundamental difference in the market. Computers emphasise technology, appliances [and applications] emphasise convenience, ease of use; they downplay or even hide the technology. Computers are targeted at technology enthusiasts, even though a larger section of the market is buying them… Alas, most of today’s machines force people to use them in their own terms, terms that are antithetical to the way people work and think. The result is frustration, an increase in the rate of error, and a general turning away from technology. Might schools of computer science start teaching the human-centred approach that is necessary to reverse the trend?’
So how should our computing departments respond to the change in emphasis arising from the increasing dependence of modern society on computing-based products? Surely our role is to ensure that the needs of our consumers are met in the most appropriate way. For this we need to produce computing graduates who can both understand the technology and also exploit it for the benefit of the end-users. In a world increasingly reliant on computer-based applications we need to occupy the centre ground, between the underlying technology base and the ever expanding field of end-user applications. The emphasis has moved from computing technology to information technology and we need to move with it.
Dahlbom and Mathiassen (1997) suggest that ‘the penetration of technology into all aspects of life means that we need to concentrate on artefacts in use rather than on how artefacts work …it is the power of information technology to infiltrate our lives and our minds that places new demands on our profession.’ We must prepare our students to get nearer to the concerns of their users. Traditionally we worked behind the interface, between the constructors and the consumers, but increasingly we need to be working at the interface. We need to produce a generation of computing graduates who can take on a role as creators of applications, and who can also act as the connectors or communicators between the technical providers and the non-technical end users. Most of these graduates will never need an advanced grasp of mathematical, scientific or engineering fundamentals, but all of them will need an advanced understanding of logic, problem solving, and the design of human-computer interfaces.
Criticisms of computer-based systems and artefacts are often related to issues like unnecessary complexity, poor interface design, inadequate future proofing (leading to continual updating), and failure to appreciate cultural differences between potential user groups. All these criticisms highlight the need for our courses to provide more emphasis on the concerns of the end user. Denning and Dunham (2001) characterised this as ‘a need for value skills, not just technical skills if we want [our graduates] to be seen as professionals not just low grade technicians.’ Such skills are sometimes characterised as soft skills as opposed to hard skills, or as people skills rather than technical skills, but this terminology should not lull us into any false sense of security about the relative ease with which such soft skills can be taught or learned.
Our students must be encouraged to learn to apply algorithmic thinking to harness the power of the computer for the benefit of humanity, to learn to manage complexity, to learn to manage change, and above all to learn to empathise with the majority of the population who are the intended users of every application which they create. Not an easy task, but one which needs to be grasped if we are to give our students an education which delivers the competences they will need in order to pursue computing careers in the twenty-first century.
Author: Stanley J Oldfield
Date: February 2012
Clarke, Arthur C. (1973) Profiles of the Future (revised edition)
Dahlbom, B. and Mathiassen, L. (1997) The Future of our Profession: There’s more to being a good engineer than a high level of technical competence, CACM Vol. 40, No. 6.
Denning, P. and Dunham, R. (2001) The Core of the Third-Wave Professional, CACM Vol. 44, No. 11.
Norman, Donald A. (1998) The Invisible Computer, MIT Press.