Computer user satisfaction
How satisfied a user is with a computer program
Computer user satisfaction (CUS) is the systematic measurement and evaluation of how well a computer system or application fulfills the needs and expectations of individual users. Research on computer user satisfaction studies how interactions with technology can be improved by adapting systems to users' psychological preferences and tendencies.
Evaluating user satisfaction helps gauge product stability, track industry trends, and measure overall user contentment.
Fields like User Interface (UI) Design and User Experience (UX) Design focus on the direct interactions people have with a system. While UI and UX often rely on separate methodologies, they share the goal of making systems more intuitive, efficient, and appealing.
The Problem of Defining Computer User Satisfaction
In the literature, there are a variety of terms for computer user satisfaction (CUS): "user satisfaction," "user information satisfaction" (UIS), "system acceptance," "perceived usefulness," "MIS appreciation," "feelings about an information system," and "system satisfaction." For our purposes, we will refer to CUS, or user satisfaction. Ang and Koh (1997) describe user information satisfaction as "a perceptual or subjective measure of system success."
According to Doll and Torkzadeh, CUS is the opinion of the user about a specific computer application that they use. Ives and colleagues defined CUS as "the extent to which users believe the information system available to them meets their information requirements."
Several studies have investigated whether certain factors influence CUS. Yaverbaum's study found that people who use their computers irregularly tend to be more satisfied than regular users.
Mullany, Tan, and Gallupe claim that CUS is chiefly influenced by prior experience with the system or an analogue. Conversely, motivation, they suggest, is based on beliefs about the future use of the system.
Applications
Using findings from CUS, product designers, business analysts, and software engineers anticipate change and prevent user loss by identifying missing features, shifts in requirements, general improvements, or corrections.
Satisfaction measurements are most often employed by companies or organizations to design their products to be more appealing to consumers, identify practices that could be streamlined, harvest personal data to sell, and determine the highest price they can set for the least quality. For example, based on satisfaction metrics, a company may decide to discontinue support for an unpopular service. CUS may also be extended to employee satisfaction, for which similar motivations arise. As an ulterior motive, CUS surveys may also serve to pacify the group being surveyed, as it gives them an outlet to vent frustrations.
Note that in Doll and Torkzadeh's definition of CUS, the term "user" can refer both to the user of a product and to the user of a device used to access that product.
The CUS and the UIS
Bailey and Pearson's 39-factor Computer User Satisfaction (CUS) questionnaire and the later User Information Satisfaction (UIS) instrument are both multi-quality surveys: each asks respondents to rank or rate a system on many separate categories. Bailey and Pearson asked participants to judge 39 qualities on five scales each; the first four scales captured favorability ratings, and the fifth was an importance ranking. From the importance rankings, the researchers found that their sample of users rated as most important "accuracy, reliability, timeliness, relevancy, and confidence," and as least important "feelings of control, volume of output, vendor support, degree of training, and organizational position of EDP" (the electronic data processing, or computing, department).

However, the CUS questionnaire requires 39 × 5 = 195 responses. Ives, Olson, and Baroudi, among others, argued that so many responses invite attrition error: respondents fail to return long questionnaires, and the failure rate correlates with survey length. This can reduce sample sizes and distort results, since those who do return long questionnaires may differ psychologically from those who do not. Ives and colleagues developed the User Information Satisfaction (UIS) instrument to address this. The UIS asks respondents to rate only 13 metrics, with two scales per metric, yielding 26 individual responses. Even so, Islam, Mervi, and Käköla have argued that measuring CUS in industry settings remains difficult because response rates often stay low, so a still simpler version of the CUS measurement method is necessary.
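The response burden of the two instruments can be illustrated with a small sketch. The factor and scale counts come from the text above; the helper function itself is just an illustrative model, not part of either instrument:

```python
# Illustrative model of questionnaire response burden, using the
# counts given in the text (39 x 5 for the CUS, 13 x 2 for the UIS).

def response_count(qualities: int, scales_per_quality: int) -> int:
    """Total individual responses a participant must provide when every
    quality is judged on every scale."""
    return qualities * scales_per_quality

# Bailey and Pearson's CUS: 39 qualities, each judged on 5 scales
# (four favorability ratings plus one importance ranking).
cus = response_count(qualities=39, scales_per_quality=5)

# Ives, Olson, and Baroudi's UIS: 13 metrics, 2 scales each.
uis = response_count(qualities=13, scales_per_quality=2)

print(cus)  # 195
print(uis)  # 26
```

The roughly sevenfold reduction in required responses (195 vs. 26) is the core of the attrition argument: shorter instruments are returned more often, at the cost of measuring fewer qualities.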
The Problem With Dating of Metrics
An early criticism of these measures was that surveys would become outdated as computer technology evolves. This led to the synthesis of new metric-based surveys. Doll and Torkzadeh, for example, produced a metric-based survey for the "end user." They define end-users as those who tend to interact with a computer interface alone without the involvement of operational staff. McKinney, Yoon, and Zahedi developed a model and survey for measuring web customer satisfaction.
Grounding in Theory
Another difficulty with most of these surveys is their lack of a foundation in psychological theory. Exceptions include the model of web site design success developed by Zhang and von Dran, and the measure of web site satisfaction developed by Cheung and Lee.
Cognitive style
A study showed that over the life of a system, user satisfaction tends on average to increase as users gain experience with it. The study found that users' cognitive style (their preferred approach to problem solving) was not an accurate predictor of actual CUS; the system's developers also participated, and their cognitive style likewise showed no strong correlation with actual CUS. However, a strong correlation was observed between 85 and 652 days into using the system: users' manner of thinking and their attitude towards a particular product became increasingly correlated as time went on. Some researchers have hypothesized that familiarity with a system may cause users to mentally assimilate to, and accommodate, that system. Mullany, Tan, and Gallupe devised the System Satisfaction Schedule (SSS), an instrument that uses user-generated qualities and so avoids the problem of qualities becoming dated. They define CUS as the absence of user dissatisfaction and complaint, as assessed by users who have had at least some experience of using the system; motivation, conversely, is based on beliefs about the future use of the system.
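The kind of analysis described above can be sketched as follows. The data here are invented for illustration; only the general approach (computing a Pearson correlation between cognitive-style scores and satisfaction ratings at different stages of system use) reflects the study design:

```python
# Hedged sketch: checking whether cognitive-style scores correlate with
# satisfaction ratings at different stages of system use. All values
# below are invented for illustration.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: one cognitive-style score per user, with that user's
# satisfaction rating shortly after adoption and again months later.
style = [3.1, 4.5, 2.2, 5.0, 3.8, 1.9]
satisfaction_early = [4.0, 3.2, 4.4, 2.9, 3.6, 4.8]  # early in use
satisfaction_late = [4.6, 2.8, 5.1, 2.1, 3.0, 5.4]   # later in use

print(round(pearson(style, satisfaction_early), 2))
print(round(pearson(style, satisfaction_late), 2))
```

A finding like the one reported (no correlation early, strong correlation between 85 and 652 days) would appear here as a coefficient near zero for the early window and near ±1 for the later one.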
Future developments
Currently, some scholars and practitioners are experimenting with other measurement methods and further refinements to the definition of CUS. Others are replacing structured questionnaires with unstructured ones, where the respondent is asked simply to write down or dictate everything about a system that either satisfies or dissatisfies them. One problem with this approach, however, is that it tends not to yield quantitative results, making comparisons and statistical analysis difficult.
References
- Igersheim, Roy H. (1976). "Proceedings of the June 7–10, 1976, National Computer Conference and Exposition – AFIPS '76". Association for Computing Machinery.
- (1980). "Perceived Usefulness of Information: A Psychometric Examination". Decision Sciences.
- Swanson, E. Burton (1974). "Management Information Systems: Appreciation and Involvement". Management Science.
- Maish, Alexander M. (1979). "A User's Behavior toward His MIS". MIS Quarterly.
- (2004). "The State of Research on Information System Satisfaction". Journal of Information Technology Theory and Application.
- Yaverbaum, Gayle J. (1988). "Critical Factors in the User Environment: An Experimental Study of Users, Organizations and Tasks". MIS Quarterly.
- "What Is a Customer Satisfaction Survey?".
- (2024). "Privacy Policy".
- "How to use Pricing Surveys".
- (1983). "Development of a Tool for Measuring and Analyzing Computer User Satisfaction". Management Science.
- (1983). "The measurement of user information satisfaction". Communications of the ACM.
- (2002). "The Measurement of Web-Customer Satisfaction: An Expectation and Disconfirmation Approach". Information Systems Research.
- Cheung, C. M. K.; Lee, M. K. O. (2005). "The Asymmetric Effect of Website Attribute Performance on Satisfaction: An Empirical Study". Proceedings of the 38th Annual Hawaii International Conference on System Sciences. doi:10.1109/HICSS.2005.585.
- (1972). "Work and the nature of man". Staples Press.
- Mullany, Michael John (2006). "The Use of Analyst-User Cognitive Style Differentials to Predict Aspects of User Satisfaction with Information Systems". Auckland University of Technology.
This article was imported from Wikipedia and is available under the Creative Commons Attribution-ShareAlike 4.0 License. Content has been adapted to SurfDoc format. Original contributors can be found on the article history page.