


2003, Management of Education in the Information Age

7
INVOLVING THE ACADEMIC
A Test for Effective University ITEM Systems

Bill Davey (1) and Arthur Tatnall (2)
(1) School of Information Technology, RMIT University, Melbourne, Australia
(2) School of Information Systems, Victoria University, Melbourne, Australia

Abstract: ITEM systems in the university sector are large, which means they are often purpose-written for an individual university. Such systems involve a significant investment cost when compared with commercial systems. An interesting issue with these systems is which stakeholders they appear to recognise, as measured by the functionality of the working system. Initial case studies of three universities in one country showed that existing administrative systems offered little support for teaching purposes. An extended survey across a number of different countries showed few exceptions, and a test was developed to determine whether a university ITEM system included the classroom teaching function as a user requirement. The study found few systems catering for even the most trivial requirements of teaching.

Key words: Information technology, university student records systems, academics, stakeholders

1. INTRODUCTION

Researchers investigating the use of information technology in educational management (ITEM) often tend to concentrate on the use of information systems in schools. Universities, however, provide an interesting field of study for the ITEM researcher as, unlike secondary and elementary schools, a university is often large enough to justify a purpose-written administrative system. An individual university needs to store a huge amount of data and is often prepared to spend as much time and money as a sizeable business in designing and producing a system to fulfil its complex administrative needs.

Research at a number of universities has shown that educational administrative systems, and in particular student records systems, often do not provide the simplest of functionality when viewed from the perspective of educational delivery in the classroom. The research reported here implies that the delivery of teaching-related services has been a neglected aspect in the development of administrative systems in universities. In this paper we provide a Litmus Test for determining the focus of a university student records system and how well it relates to classroom teaching needs.

Anecdotal evidence suggests that functions crossing academic boundaries within a university are often completely out of the control of academics, who are usually focused within their discipline area. A question that arises for the ITEM researcher in this context is whether classroom educational specifications are included within the ITEM systems commonly being produced in universities, and whether the picture is cultural or nationally specific. Our particular concern is with student records systems that could, in many cases, easily provide much more useful teaching information than they currently do. This paper examines the use of university student records systems, particularly from the viewpoint of the university classroom.
It argues that academics, in their teaching role, should be regarded as significant stakeholders in these systems, but notes that their needs have often not been considered. We question how well university administrative systems meet the needs of teaching, and what information university teachers might wish to obtain from such systems but cannot obtain now.

2. IDENTIFYING STAKEHOLDERS, CLIENTS AND USER REQUIREMENTS

The information systems literature points out that effort spent in the determination of stakeholder and user requirements early in a system's development is crucial to its success. The literature particularly stresses the necessity of involving users in the process of designing information systems (Fuller and William 1994; Lindgaard 1994; Lawrence, Shah and Golder 1997) if we want those systems to be used to their full potential. Lawrence et al. (1997) point to a need to consult with users, while Lindgaard (1994) notes that a large body of research has shown that potential users do not make best use of information systems unless they feel that these systems have been designed with their involvement and in their interest.

Both users and clients are stakeholders in the development of any information system, but their needs are not always the same. It is the client who commissions and pays for the development of the system, and the system will be designed to their specifications. A problem arises, however, when the client is not also the only significant user of the system. In information systems development it is not unusual for a system to fail because, although it was technically well written, it did not meet the needs of its users (Meredith and Mantel 1995). Even a well-written system that does not do what all its users want is a waste of resources. As Post (1999) puts it: "You must thoroughly understand the business needs before you can create a useful system" (p. 341).

In implying that university student records systems do not meet the needs of all their users, we are not arguing that these systems are a failure. We are arguing, however, that they often do not achieve their full potential in providing all the useful information of which they are capable, to all those people who could make good use of it. Unfortunately, teaching is not always seen as a business need of university student records systems.

3. POST IMPLEMENTATION EVALUATION

The field of post implementation review is well researched in a number of knowledge domains. In education and health, writers such as Visscher (1999) and Perrin (2000) have written seminal articles on the value and problems associated with measuring effectiveness against specifications, as opposed to using level of use as a post implementation review technique. Post implementation reviews commonly concentrate on levels of use, either of the program as a whole or of specific functionality within it. Visscher (1999) proposes that "the higher the perceived system quality, the more the implementation process promotes system use, and the more the features of the SISs match the nature of schools, the more intense the use of SISs is expected to be" (p. 172). The argument here, that 'if it is good it will be used, if it is used it must be good', helps us to distinguish between systems. It cannot, however, tell us about the quality and purpose of a system to the extent that the system is missing features, or is ignoring some of its potential users.
In health, several researchers have identified gains to be made when clients or users are consulted directly after implementation (Osher et al. 2001; Shah 2001). In the health knowledge domain these viewpoints have been compared, and Lee and Menon (2000) used both parametric and non-parametric analysis of the efficiencies gained by IT investment in hospitals. Their conclusions were different from those of other studies in the area. They based their measurements on the proposition that "Efficiency, when measured through post-hoc analysis, tells us how well the final mix of inputs has affected production ..." (p. 103). Clearly, even within a model as rigorous as that possible when measuring efficiency, there are disparities of outcome when alternative measurement methods are employed. A paper by Bryce et al. (2000) describes the application of three different models to measure the outcomes of a single system change. The paper concludes that: "This article illustrates that model selection can influence which firms are rated as the most efficient. We therefore cannot simply dismiss the decision as arbitrary." (p. 511)

In the hospital setting, Osher et al. (2001) argue that "Failing to involve family members in the process of framing analysis questions and interpreting results deprives them of the opportunity to ask additional questions of the evaluation data that may improve the overall usefulness of the evaluation" (p. 70).

The argument proposed by this paper is that it is useful to ask what users need from a system, rather than whether they are happy with the system presented. When someone at a meeting asks 'is this a convenient time to meet?', those at the meeting are clearly able to attend at that time; the question should also, of course, be put to interested parties who are not in attendance. In IT systems terms the equivalent is to ask 'are you happy with the performance of the system functions?' What should also be asked, but very seldom is, is 'What information do you need to perform your job, and to what extent does the system currently provide that information?'

4. STUDY OF UNIVERSITY SYSTEMS

The research reported here commenced with the study of three universities in Victoria, Australia. Anecdotal evidence had indicated a common problem amongst academics arising from their interactions with the university administrative systems. In initial interviews academics complained about unnecessarily duplicated work. Three examples, common to all three universities, illustrate this type of problem:
- Examination results were entered by hand on a form generated from a computer printout from the central student records database. Usually, before transcription, these results were first printed onto paper from the academics' own student record systems, kept in an Excel spreadsheet or something similar.
- Students enrolled in courses on a computer system by filling in paper forms. These allowed course lists to be produced, but academics could only obtain a paper copy of the course list. Individual tutorial and workshop lists were not recorded on the main student record system, but on individual PCs using whatever method the individual academics had developed.
- Academic advice, including such details as checks on prerequisite courses and the availability of courses in the semesters required for minimum-time completion, was delivered to students verbally, as no provision for recording these details in the student records system existed.
Many of these details were recorded on paper in redundant filing systems, and important details such as student progress interview results were stored on paper in files.

Interviews with academics at the three universities showed that the simplest ITEM requirements generated by classroom needs had not crossed the minds of even senior academics, let alone university administrators. Such fundamental reports as student academic history, timetable clashes between course enrolments, and performance by assessment type were not only unavailable, but academics were so cynical about their chances of influencing the development of university-wide systems that they had not even considered the possibility that the student records system was in any way provided to serve their needs. During the course of this research at least three separate IT systems were set up by individual departments or schools in competition with the university ITEM system.

5. DEVELOPMENT OF THE LITMUS TEST

Research has shown (Martilla and McLean 1977) that it may be more effective for users themselves to determine the factors they think important to the effective use of information systems. In a case study by Shah (2001), user input raised issues such as communication between the Information Services Department and users, the speed of response of particular sections of the system, and the existence of specific reports. A question arises as to the prevalence of system features that have importance to users but have not been emphasised by the developers of systems.

A need became apparent for a simple method of determining whether a university system had been developed with academic classroom needs taken as a stakeholder requirement. The test would best enunciate the principle if it addressed a universal need of a classroom teacher, rather than a partially administrative function that had a bearing on the classroom. The question developed after several trials was:

Does your system allow you, at your desk, to obtain a list of the performance of students in prerequisite courses for your course?

This question was trialled on academics from a number of universities and several different discipline areas. In each case the response from the interviewee was immediate and certain: it does not! In data terms the request is a modest one, as the sketch below illustrates.

6. THE RESEARCH

After the preliminary studies, a wider study was conducted to see if the particular issue identified by the Litmus Test was a useful way of identifying weaknesses in a post implementation review of university systems in different cultural and political environments. Individual academics were contacted directly in: three universities in Victoria, Australia; a university in Perth, Western Australia; a university in Queensland, Australia; a major private university in the Philippines; a major state university in Indonesia; a modern middle-level university in Sweden; two middle-level universities in England; two provincial universities in Canada; a major private university in the USA; and a research university in the Netherlands. The aim of this very specific application of the Litmus Test in these universities was to determine:
- Do academics see themselves as clients of a university administrative system?
- Can instruments be developed for post implementation review that have relevance, independent of cultural considerations?
- Is the practice of developing student administration systems with little regard for improving the educational experience widespread in universities?
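Before turning to the findings, it is worth making concrete how modest a demand the Litmus Test places on a student records system. The following is a minimal sketch, not drawn from any of the systems studied: the table and column names (enrolment, prerequisite, result and so on) are hypothetical, and SQLite merely stands in for whatever database a real system would use. It shows the single query needed to list, for every student enrolled in a course, their marks in that course's prerequisites.

    # Minimal sketch of the Litmus Test report against a hypothetical schema.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE enrolment    (student_id TEXT, course_id TEXT);
        CREATE TABLE result       (student_id TEXT, course_id TEXT, mark INTEGER);
        CREATE TABLE prerequisite (course_id TEXT, prereq_id TEXT);

        -- Illustrative data: IT101 is a prerequisite for IT202.
        INSERT INTO prerequisite VALUES ('IT202', 'IT101');
        INSERT INTO enrolment    VALUES ('s1', 'IT202'), ('s2', 'IT202');
        INSERT INTO result       VALUES ('s1', 'IT101', 74), ('s2', 'IT101', 51);
    """)

    # For each student enrolled in the given course, their mark in each of
    # that course's prerequisites (NULL if the prerequisite was never taken).
    QUERY = """
        SELECT e.student_id, p.prereq_id, r.mark
        FROM enrolment e
        JOIN prerequisite p ON p.course_id = e.course_id
        LEFT JOIN result  r ON r.student_id = e.student_id
                           AND r.course_id  = p.prereq_id
        WHERE e.course_id = ?
        ORDER BY e.student_id;
    """

    for student, prereq, mark in conn.execute(QUERY, ("IT202",)):
        print(student, prereq, mark)   # e.g. s1 IT101 74

Nothing in this sketch goes beyond data that student records systems already hold for enrolment and results processing; the question the Litmus Test poses is simply whether an equivalent report can be obtained by an academic at their desk.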
In eleven of the thirteen cases studied, the Litmus Test was answered in the negative. The vast majority of academics interviewed indicated that there was very little information available in any form that would enable them to tune courses on the basis of student performance or readiness. Only in two cases was information of the type related to the Litmus Test available. The interviewers also reported another interesting comment: several respondents indicated that while the information described in the Litmus Test was not available, they could not see why an academic teacher would want that information.

Analysis was then conducted in order to find some explanation of the differences in responses. The first issue investigated was the existence of the two universities where Litmus Test-type information was available. An intensive study of the systems in each case showed that they were much smaller and less integrated than those typical of the other institutions studied. Enquiries found that these systems had been commissioned and written by academics working at the universities concerned. An explanation for this can be found in the background of the developers: in each case they were experienced teachers, and it could be that their teaching experience led them to include features of particular use to other teachers. The systems in all the other universities studied were, in each case, written by various commercial organisations. It could be postulated that commercial systems would be tailored to respond to the demands of those in the university responsible for funding major software projects. An analysis of the difference between the typical commercial systems and the two 'home grown' systems did, in fact, show a high level of integration with financial and state reporting functions in the commercial products. This would be consistent with the proposition that developments commissioned by the senior administrative sections of a university have resulted in systems that cater only to common high-level administrative needs.

Some analysis of the interviews was conducted with a view to identifying differences between universities where academics were interested in teaching data and those where there was no pressing interest in such data. No differences were found in size, age or general aspects of educational programs. The interviewers did, however, report a difference in culture between the relevant groups of universities. While cultural factors are difficult to define and measure, the general opinion amongst the researchers was that universities might be thought of as falling into two main streams. In the first type would be those universities built on a tradition of research and scholarship. In some cases this is consistent with government funding models that support the research priority through separate and generous research funding. In these institutions the interviewers found a smaller proportion of the average senior academic workload allocated to teaching duties. The second type of institution could be categorised as teaching universities. In this type of institution teaching hours for senior academics were a larger proportion of total workload, and funding was often clearly on the basis of student numbers, with research being 'subtracted' from those funds where possible. Often this type of university was one that had developed from a technical institute or polytechnic with a very strong teaching tradition.
7. CONCLUSION

Although referring to administrative information systems in schools, Fulmer and Frank contend that while these systems have been quite effective in business-related tasks such as inventory control, personnel management, cost analysis and audit, they have been "... far less effective at depicting the conditions of teaching and learning. ... They have not provided quality data for analysing and intervening in processes of teaching and learning." (Fulmer and Frank 1997: 122) In an earlier ITEM paper (Tatnall and Davey 1995) we also argued that educational management systems should make more use of the 'higher levels' of information system and provide decision support and executive information facilities, rather than just transaction processing. In this paper we are likewise arguing that universities are not getting the most out of their student records systems and that more functionality is possible, particularly in the provision of information to assist classroom teachers.

From our preliminary investigations it appears that, in their teaching role, academics are not satisfied with their interactions with, and the information available to them from, university student records systems. To investigate this further we have developed a simple Litmus Test that can be applied painlessly and with little effort from the academics questioned. Research in the health industry has shown that traditional methods of post hoc analysis of IT systems often miss important information that would result in increased efficiency for the organisation. The aim of the Litmus Test is to highlight that entire areas of information provision can be ignored by ITEM developers, and will never be found if the post hoc review concentrates only on those factors that were included in the specifications.

In thirteen universities the Litmus Test found that an entire class of potential users of university student records systems had been ignored. Only in two places was the system supplying this information, and those two counter-examples had the common factor that the systems had been written by stakeholders within the institution; the inclusion of educational functionality might therefore be attributed to the unusual nature of the development team. More research is now needed, using the Litmus Test, to see whether this technique is useful for identifying missing functionality. This research would be useful if extended to a broader range of universities and could also be applied in other industry sectors.

REFERENCES

Bryce, C.L., Engberg, J.B. and Wholey, D.R. (2000). Comparing the Agreement Among Alternative Models in Evaluating HMO Efficiency. Health Care Services Research 35(2), 509-528.
Fuller, F. and William, M. (1994). Computers and Information Processing. Boyd & Fraser, Massachusetts.
Fulmer, C.L. and Frank, F.P. (1997). Developing Information Systems for Schools of the Future. In Information Technology in Educational Management for the Schools of the Future. Fung, A.C.W., Visscher, A.J., Barta, B.Z. and Teather, D.C.B. (eds). Chapman & Hall/IFIP, London.
Lawrence, D.R., Shah, H.U. and Golder, P.A. (1997). Business Users and the Information Systems Development Process. In The Place of Information Technology in Management and Business Education. Barta, B.Z., Tatnall, A. and Juliff, P. (eds). Chapman & Hall/IFIP, London.
Lee, B. and Menon, N.M. (2000). Information Technology Value Through Different Normative Lenses. Journal of Management Information Systems 16(4), 99-119.
Lindgaard, G. (1994). Usability Testing and System Evaluation. Chapman & Hall, London.
Martilla, L.A. and McLean, E.R. (1977). Importance-Performance Analysis. Journal of Marketing (January), 25-33.
Meredith, J.R. and Mantel, S.J., Jr. (1995). Project Management: A Managerial Approach. John Wiley & Sons, New York.
Osher, T.W., Van Kammen, W. and Zaro, S.M. (2001). Family Participation in Evaluating Systems of Care: Family, Research and Service Systems Perspectives. Journal of Emotional and Behavioural Disorders 9(1), 63-70.
Perrin, R. (2000). Fine-Tuning Information Systems to Improve Performance. Healthcare Financial Management 54(5), 100-102.
Post, G.V. (1999). Database Management Systems. McGraw Hill, London.
Shah, S.K. (2001). Improving Information Systems Performance Through Client Value Assessment: A Case Study. Review of Business 22(1/2), 37-42.
Tatnall, A. and Davey, B. (1995). Executive Information Systems in School Management: a Research Perspective. In World Conference on Computers in Education VI. WCCE '95: Liberating the Learner. Tinsley, J.D. and van Weert, T.J. (eds). Chapman & Hall/IFIP, London.
Visscher, A.J. and Bloemen, P.P.M. (1999). Evaluation of the Use of Computer-Assisted Management Information Systems in Dutch Schools. Journal of Research on Computing in Education 32(1), 172-181.