2.1. Usability vs. User Experience (UX)
According to ISO 9241 [16], usability is the extent to which a system, product, or service can be used by specific users to reach specific goals effectively, efficiently, and satisfactorily in a specific context of use. User experience means user perceptions and responses resulting from the use and/or anticipated use of a product, system, or service [16]. The guidelines in the standard are employed to assess the usability of websites, most often with a questionnaire such as the System Usability Scale (SUS) [17] or Travis’s checklist [18].
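The SUS mentioned above condenses ten alternating positively and negatively worded items into a single 0–100 score. A minimal sketch of Brooke's standard scoring rule is given below; the respondent data are hypothetical example values, not taken from any study cited here.

```python
# Illustrative SUS scoring (standard Brooke formula).
# The response vectors used below are hypothetical example data.

def sus_score(responses):
    """Compute the System Usability Scale score (0-100).

    `responses` is a list of ten answers on a 1-5 scale, in item order.
    Odd-numbered items (1, 3, 5, ...) are positively worded and contribute
    (answer - 1); even-numbered items are negatively worded and contribute
    (5 - answer). The summed contributions are multiplied by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based: even index = odd-numbered item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Hypothetical respondent: agrees with positive items, disagrees with negative ones.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # -> 100.0
```

Note that an all-neutral respondent (all answers 3) lands at exactly 50.0, the midpoint of the scale.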
The terms ‘usability’ and ‘satisfaction’ are closely linked. Satisfaction is frequently considered a variable of usability, so certain tools, instruments, and usability evaluation scales include it as such. However, satisfaction is more a consequence of usability than one of its factors [19,20]. ISO/IEC 9126-1:2001 offers a model for classifying software quality as a structured set of characteristics: functionality, reliability, usability, efficiency, maintainability, and portability. ISO/IEC 25000:2014 [21] provides guidance for using the new series of international standards named Systems and Software Quality Requirements and Evaluation (SQuaRE). It aims to give a general overview of the SQuaRE contents and explains the transition from the old ISO/IEC 9126 and ISO/IEC 14598 series to SQuaRE [21]. The usability definition used in ISO/IEC 25010:2023 [22] is the degree to which specified users can use a product or system to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use [22].
Effectiveness is the accuracy and completeness with which users achieve specific goals. Efficiency is the ratio of the resources used to the outcomes obtained, while satisfaction is the extent to which the user’s physical, cognitive, and emotional responses resulting from the use of a system, product, or service meet the user’s needs and expectations [16]. According to the standard, product or service usability is determined by user-friendliness (especially at the first encounter), ease of use in any subsequent use, the pace of learning how to use the product or service, the capability to resolve operating problems by oneself, and overall satisfaction with the product or service. A product is an object created or generated by a person or a ‘machine’. A service delivers value for the customer by facilitating the results the customer wants to achieve; services can include both human–system and human–human interactions. A system combines interacting elements organised to achieve one or more intended purposes, whereas an interactive system is a combination of hardware and/or software and/or services and/or people that users interact with to achieve specific goals [16].
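In practice, the three components of this definition are often operationalised as simple task-level measures: effectiveness as a completion rate, and efficiency as goals achieved per unit of time. The sketch below illustrates one common way of doing this; the function names and task-log figures are our own illustrative choices, not prescribed by the standard.

```python
# Sketch of how the ISO 9241-11 usability components are commonly
# operationalised in user testing. The task-log data are hypothetical.

def effectiveness(completed, attempted):
    """Effectiveness as a task completion rate (accuracy/completeness proxy)."""
    return completed / attempted

def time_based_efficiency(goals_per_task, seconds_per_task):
    """Efficiency as goals achieved per second, averaged across tasks.

    A failed task contributes 0 goals but still counts its elapsed time.
    """
    ratios = [g / t for g, t in zip(goals_per_task, seconds_per_task)]
    return sum(ratios) / len(ratios)

# Hypothetical session: 8 of 10 tasks completed.
print(effectiveness(8, 10))  # -> 0.8

# Three tasks: two succeeded (30 s and 45 s), one failed after 60 s.
print(time_based_efficiency([1, 1, 0], [30.0, 45.0, 60.0]))
```

Satisfaction, the third component, is usually captured separately with a standardised questionnaire such as the SUS discussed above.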
2.2. Usability Metrics for User Experience
Websites have specific functions, such as providing information, contact points, booking capabilities, or payment methods, using tools that exhibit various degrees of usability. Three aspects of system quality matter most to the user: (1) functionality, i.e., the capabilities of the system; (2) ergonomics, i.e., the ability to achieve the intended purpose with the least effort possible; and (3) usability, i.e., the combination of the degree to which users reach their goals, the effort required, and perceived satisfaction with use. Usability tests are most often conducted as exploratory tests by expert respondents and use heuristics or tools that perform automated algorithmic usability assessments [20,23].
A heuristic assessment is a quality assessment process in which intuition, experience, or streamlined evaluation principles are essential. It is a decision-making or problem-solving method based on approximate judgment instead of data and calculations. Heuristics are employed in various fields, such as psychology, artificial intelligence, economics, management, and website quality assessment [24]. Heuristic assessment of website usability investigates the user interface and usability using heuristics or interaction design principles. This technique is used in interface design and evaluation to identify potential problems related to user–website interactions [25].
Heuristics are generic principles developed over years of research on interface design. They draw on philosophy, psychology, and, mainly, experience with human–computer interaction (HCI). One of the most popular inspection-based usability assessment methods in HCI is heuristic evaluation (HE), described by Nielsen and Molich [26] and later improved by Nielsen [27]. Often used because of its cost-effectiveness and ease of deployment, HE involves at least one experienced expert who follows a set of guidelines (or heuristics) during a system review (evaluation). This makes HE an economical alternative to empirical usability tests with multiple actual users. Heuristic evaluations are probably most valuable for assessing an existing system or its prototype early on to pinpoint major usability issues.
Heuristic evaluation involves HCI experts exploring a system, identifying usability problems, and classifying each problem as a violation of one or more usability principles or heuristics. To prepare for such an evaluation session, the testers need to draft two documents: (1) a project overview describing the objectives, target audiences, and expected usage patterns of the system being tested, and (2) a list of heuristics [28]. During a heuristic usability evaluation, experts or interface designers browse (explore) the website and analyse its compatibility with the predefined collection of heuristics. The evaluation may cover navigation controls, content layout, responsiveness, perceptibility, and many other factors of interaction quality. Heuristic usability evaluation aims to identify flaws that may hinder or impede the user experience. Still, it is only one method for assessing usability and can be combined with other techniques, such as algorithmic tests, statistical analysis, or competitive analysis. The synergistic effect of these methods can yield an exhaustive website usability assessment [23].
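The classification step described above (mapping each observed problem to violated heuristics, plus a severity judgment) lends itself to a simple record structure. The sketch below is purely illustrative: the heuristic names follow Nielsen's widely cited set, but the data structure, example findings, and use of a 0–4 severity scale are our own assumptions rather than part of any cited protocol.

```python
# Illustrative record-keeping for a heuristic evaluation session.
# Heuristic names follow Nielsen's well-known list; findings are invented.
from dataclasses import dataclass

NIELSEN_HEURISTICS = (
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
)

@dataclass
class Finding:
    location: str        # where on the site the problem was observed
    description: str     # what the evaluator saw
    heuristics: tuple    # which principle(s) the problem violates
    severity: int        # 0 (not a problem) .. 4 (usability catastrophe)

findings = [
    Finding("map layer panel", "No feedback while layers load",
            ("Visibility of system status",), 3),
    Finding("search form", "Jargon used in field labels",
            ("Match between system and the real world",), 2),
]

# Triage: major problems (severity >= 3) would head the evaluation report.
majors = [f for f in findings if f.severity >= 3]
print(len(majors))  # -> 1
```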
Questionnaires have often been used to assess users’ subjective attitudes towards their experience of using a computer system. Human–computer interaction researchers began developing standardised usability evaluation instruments in the 1980s; examples include the UMUX, UMUX-LITE, SUPR-Q, and SUS [20,29,30]. Although these questionnaires were built independently and vary in content and form, they all measure the subjective perception of usability [31,32].
The Usability Metric for User Experience (UMUX) and its shorter variant, UMUX-LITE, are among the more recent standardised usability questionnaires [29]. The UMUX is designed to yield results similar to those of the 10-item System Usability Scale (SUS) and is founded on the ISO 9241-11 [16] definition of usability [20,33]. Psychometric evaluation of the UMUX indicated acceptable levels of reliability (internal consistency), concurrent validity, and sensitivity [34]. The UMUX-LITE conforms to the technology acceptance model (TAM). The UMUX comprises four items rated on a 7-point Likert scale and has a reported Cronbach’s alpha coefficient of 0.94 [20,29]. The Standardised User Experience Percentile Rank Questionnaire (SUPR-Q) consists of eight items measuring four website factors: usability, appearance, trust, and loyalty [20,30,35]. According to Sauro [35], the primary potential advantage of the SUPR-Q over the UMUX is that it can measure more than a single factor, such as usability. Seven of the eight SUPR-Q questions are measured on a 5-point scale where 1 equals ‘strongly disagree’ and 5 equals ‘strongly agree’ [30,35].
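Like the SUS, the four UMUX items alternate between positive and negative wording and are recoded onto a common 0–100 range. The sketch below follows the published UMUX scoring formula (recoded item sum out of 24, rescaled to 100); the response vectors are hypothetical example data.

```python
# Illustrative UMUX scoring (four 7-point items; odd items positively
# worded, even items negatively worded). Responses are hypothetical.

def umux_score(responses):
    """Return a 0-100 UMUX score from four answers on a 1-7 scale."""
    if len(responses) != 4 or not all(1 <= r <= 7 for r in responses):
        raise ValueError("UMUX needs four responses on a 1-7 scale")
    recoded = [
        (r - 1) if i % 2 == 0 else (7 - r)  # 0-based index: even = odd-numbered item
        for i, r in enumerate(responses)
    ]
    return sum(recoded) * 100 / 24  # maximum recoded sum is 4 * 6 = 24

# Hypothetical best-case respondent.
print(umux_score([7, 1, 7, 1]))  # -> 100.0
```

The UMUX-LITE keeps only the two positively worded items and rescales them analogously; published work additionally applies a regression adjustment to align its scores with the SUS, which is omitted here.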
The Questionnaire for User Interaction Satisfaction (QUIS) was developed as a 27-item, 9-point bipolar scale representing five latent variables related to the usability construct [29,36]. The QUIS is used often. For example, Fezi and Wong [37] invited 32 participants (graphic user interface designers and programmers) to examine the usability of user interface styles for learning a software development suite, namely Adobe Flash CS4, using the QUIS tool. Adinda and Suzianti [38] investigated the usability of a mobile e-administration application with the QUIS and SUS questionnaires; the study confirmed that the user interface needed redesigning following the principles of UI design and the 10 Heuristics of User Interface Design. Fang and Lin [39] also employed the QUIS to compare the usability of VR travel software for mobile phones, such as Google Street View, VeeR VR, and Sites in VR. Other scales are available, such as the Software Usability Measurement Inventory (SUMI), consisting of 50 items with a 3-point Likert scale representing five latent variables [40]. The Post-Study System Usability Questionnaire (PSSUQ) initially consisted of 19 items with a 7-point Likert scale and a ‘not applicable’ (N/A) option; the Computer System Usability Questionnaire (CSUQ) is its variant for field studies [41]. Another study adapted the WEBsite USability Evaluation Tool (WEBUSE) [42] to evaluate a university’s website usability; the researchers assumed the student perspective and searched for associations between usability and user satisfaction [43]. All these tools are useful for evaluating hardware and software usability and can contribute to improving usability quality.
2.3. Related Work
Geoportals are integrated web-based systems providing tools for open spatial data sharing and geo-information management online [3]. The analysis by Blake et al. [44] demonstrated that relatively few studies have addressed geoportal usability, even though usability is an important determinant of geoportal quality. For example, one characteristic critical for geoportal usability is the graphic user interface (GUI), whose purpose is to provide maximum usability while minimising cognitive effort. Accessibility and usability are both commonly used terms referring to the satisfaction experienced by a user of a service or product; however, they are only two of several concepts applied to websites. Usability focuses directly on the user experience (subjective perceptions) that emerges from the synergistic effect of design, ergonomics, content, and user interface quality [45].
The literature offers various methods for investigating the usability of websites, including geoportals. The most common types involve users, survey questionnaires, and heuristic evaluations; case studies have demonstrated that results are similar regardless of the method [24]. What is more, Komarkova et al. [24] recommend mixing methods to identify more usability issues. Gkonos et al. [46] noted that geoportals support the sharing of geospatial data for various purposes, and recent years have seen new research areas emerge around these websites. With their detailed spatial datasets, geoportals can support universities in numerous activities, including research and education. Martins et al. [47] conducted a heuristic evaluation of a web map accessible from various devices; their tests found some components in need of optimisation. Martínez-Falero et al. [19] employed the System Usability Scale (SUS) to evaluate the technical quality and usability of the SILVANET application using the opinions of an expert panel. The SUS is one of the most widely used questionnaires for measuring the usability of and satisfaction with IT systems [48]. It measures global satisfaction with a system, particularly through its subscales of usability and learnability [49]. Capeleti et al. [50] found that the role of geoportals in decision-making is growing and that adequate usability of these websites streamlines effective data exploration for experts and amateurs alike. They demonstrated that data-driven decision-making is critical and has become necessary for anyone seeking to gain new knowledge and draw sound insights in various contexts.
Capeleti et al. [50] employed heuristics in their research. User evaluations revealed the need for usability improvements related to the affordance of interactive map elements and information filters. Vaca et al. [45] verified the usability of the ONTORISK geoportal with online tools and heuristic tests. Bugs et al. [51] built a WebGIS application with free, easy-to-use tools; it consists of a web mapping service with selectable geospatial data layers that users can explore and comment on. They then tested its usability to pinpoint its main flaws and benefits. Słomska-Przech et al. [52] compared the usability of heat maps with different levels of generalisation for basic map user tasks. A user study compared various heat maps showing the same input data. The participants perceived the more generalised maps as easier to use, although this result did not match the performance metrics.
WebGIS usability design poses new challenges for information architects because user interactions depend heavily on the specific map, making them different from interactions with typical user interfaces [53]. Unrau et al. [54] noted that WebGIS usability assessment is difficult because interactions with sophisticated maps and functions may require expert knowledge and a certain amount of experience, both to use the applications and to interpret data on thematic maps correctly. They presented a concept for remote WebGIS usability assessment, which they believed to be a good alternative to ‘expensive and lengthy in-person user studies’ [54]. What is more, Unrau and Kray [55] proposed a new scalable approach that applies visual analytics to logged WebGIS interaction data, facilitating the interactive exploration and analysis of user behaviour. Abraham [56] reviewed previous studies to extract and classify usability problems and to identify critical components of WebGIS applications. His results suggest a significant need for a WebGIS-specific usability assessment framework that supports WebGIS-specific usability evaluation and provides generic solutions to recurring problems.
The literature review revealed that usability assessment is useful for identifying problematic elements that need optimisation to improve the usability quality of map applications.