Article

Retrospective Analysis of Municipal Geoportal Usability in the Context of the Evolution of Online Data Presentation Techniques

by
Karol Król
Department of Land Management and Landscape Architecture, Faculty of Environmental Engineering and Land Surveying, University of Agriculture in Krakow, Balicka 253c, 30-198 Krakow, Poland
ISPRS Int. J. Geo-Inf. 2024, 13(9), 307; https://doi.org/10.3390/ijgi13090307
Submission received: 1 July 2024 / Revised: 22 August 2024 / Accepted: 27 August 2024 / Published: 28 August 2024

Abstract

This article aims to assess the usability of selected map portals with a checklist. The methods employed allowed the author to conduct user experience tests from a longer temporal perspective against a retrospective analysis of the evolution of design techniques for presenting spatial data online. The author performed user experience tests on three versions of Tomice Municipality’s geoportal available on the Internet. The desktop and mobile laboratory tests were performed by fourteen experts following a test scenario. The study employs the exploratory approach, inspection method, and System Usability Scale (SUS). The author calculated the Geoportal Overall Quality (GOQ) index to better illustrate the relationships among the subjective perceptions of the usability quality of the three geoportals. The usability results were juxtaposed with performance measurements. Normalised and aggregated results of user experience demonstrated that the expert assessments of the usability of geoportals G1 and G3 on mobile devices were similar despite significant development differences. The overall results under the employed research design have confirmed that geoportal G2 offers the lowest usability in both mobile and desktop modes. The study has demonstrated that some websites can retain usability even considering the dynamic advances in hardware and software despite their design, which is perceived as outdated today. Users still expect well-performing and quick map applications, even if this means limited functionality and usability. Moreover, the results indirectly show that the past resolution of the ‘large raster problem’ led to the aggravation of the issue of ‘large scripts’.

1. Introduction

Spatial data available in web applications are used by the general public, businesses, and public administrations, such as local governments. Local government units are legally obliged to provide access to spatial data, including registers of localities, streets, and addresses, local zoning plans, the municipal asset register, public and agricultural service points, and the heritage register. All these activities require a digital model of reality, which most commonly comes as an interactive WebGIS spatial information system. Its users can conduct spatial analyses that are useful in decision-making related to municipal infrastructure management, monitoring, and planning [1].
This vast array of spatial data is shared through municipal map portals, which also offer numerous auxiliary functions [2]. According to Jiang et al. [3] (p. 1093), ‘geoportals are a consolidated web-based solution to provide open spatial data sharing and online geo-information management’. Tait [4], (p. 34) defined the geoportal as ‘a web site that presents an entry point to geographic content on the web or, more simply, a web site where geographic content can be discovered’. Maguire and Longley [5] described geoportal as a gateway to searching and discovering geospatial content and services such as directories, search tools, community information, support resources, data, and applications. Geoportals act as entry points for people to browse, access, and visualise a large array of data-related products, along with specific features to extract valuable bits of information from such products and summarise and visualise them according to stakeholders’ needs. Local geoportals often enable users to visualise, interpret, and compare in-situ data, sensor data, scientific simulations, and satellite-derived data at the parcel level [6].
Geoportals have specific user interfaces with diverse spatial functions, such as viewing and hiding thematic layers [6]. As a consequence, the quality of these websites is usually evaluated from the point of view of geodata quality [7] and based on comparative (benchmark), algorithmic, and scoring tests [8]. Evaluation of geoportal quality involves selected technical and usability attributes, including performance [9], comfort of use on mobile and desktop devices [10], and accessibility [11]. Geoportal quality is often assessed using online applications with automated, algorithmic, and scoring approaches [12]. Just as often, it is apprised through the subjective judgement of users and experts, i.e., by respondents with checklists. In this case, the focus is usually on usability and functionality [13].
Geoportals, including public ones, should be walk-up-and-use systems. Such systems are designed so that first-time or one-time users can use them properly as intended without tutorials or training. They include ATMs, ticket machines, and digital equipment in public spaces [14]. Nevertheless, geoinformation system publishers focus first on the technical quality of geoportals (back-end), including performance and geodata quality, to model the space as accurately as possible, with usability trailing behind. The main focus is on software functionality rather than the comfort of use. The principal assumption for this approach is that performance and extensive feature sets should compensate for poor usability and information architecture (front-end) [2]. Note further that usability testing is usually performed for newly built geoportals or geoportals that employ recent models, techniques, and design tools, which is only natural [15]. The usability of archaic geoportals from around a decade ago is tested rarely or not at all because they are considered ‘obsolete’. Be that as it may, a comparison of the usability of currently used geoportals with that of archaic geoportals (mostly no longer in use) may offer new insights regarding geoinformation website usability design. This poses a research gap and a reason for investigating current and archaic geoportals regarding the quality of map navigation systems, for example—including the method and scope of switching layers, which are logical geo datasets used to build map compositions.
The article aims to assess the user experience of selected map websites using a checklist against the backdrop of the evolution of design techniques. The methods employed in the study facilitate usability tests from a longer temporal perspective against a retrospective analysis of the evolution of design techniques for presenting spatial data online. The author assumes that the retrospective analysis and juxtaposition of archaic design techniques for map portals with modern solutions may contribute new value to usability studies and dent the belief that websites built with archaic design techniques are less usable on mobile devices. To this end, the author posed the following research questions:
  • Q1: Are geoportals built using archaic design techniques that are no longer employed less usable than geoportals currently in service?
  • Q2: Do technology changes, including the increase in mobile device usage, prevent the comfortable browsing of geoportals built using archaic design techniques?
This article is divided into the following parts: Section 2 characterises the problems of usability and user experience (UX), followed by standardised usability assessment methods. Next, the author discusses website usability testing focusing on map applications. Section 3 presents the research methodology, including the research object, scenario, and techniques. Section 4 offers the results of mobile and desktop evaluations, followed by a scoring usability assessment. Then, Section 5 discusses the results in the context of related work. Section 6 offers conclusions and considers limitations and practical implications.

2. Background

2.1. Usability vs. User Experience (UX)

According to ISO 9241 [16], usability is the extent to which a system, product, or service can be used by specific users to effectively, efficiently, and satisfyingly reach specific goals in a specific context of use. User experience means user perceptions and responses resulting from the use and/or anticipated use of a product, system, or service [16]. The guidelines in the standard are employed to assess the usability of websites, most often with a questionnaire, such as the System Usability Scale (SUS) [17] or Travis’s checklist [18].
The terms ‘usability’ and ‘satisfaction’ are closely linked. Satisfaction is frequently considered to be a variable of usability. Therefore, certain tools, instruments, and usability evaluation scales include satisfaction as a variable, even though it is more of a consequence of usability than one of its factors [19,20]. ISO/IEC 9126-1:2001 defines a model for classifying software quality in terms of a structured set of characteristics: functionality, reliability, usability, efficiency, maintainability, and portability. ISO/IEC 25000:2014 [21] provides guidance for using the series of international standards named Systems and Software Quality Requirements and Evaluation (SQuaRE) and aims to provide a general overview of SQuaRE contents. It also explains the transition process between the old ISO/IEC 9126 and ISO/IEC 14598 series and SQuaRE [21]. ISO/IEC 25010:2023 [22] defines usability as the degree to which specified users can use a product or system to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.
Effectiveness is the accuracy and completeness linked to how users achieve specific goals. Efficiency means the ratio of resources used in relation to the outcomes, while satisfaction is the extent to which the user’s physical, cognitive, and emotional responses that result from using a system, product, or service meet the user’s needs and expectations [16]. According to the standard, product or service usability is determined by user-friendliness, especially at the first encounter, ease of use in any subsequent use of the product or service, the pace of learning how to use the product or service, the capability to resolve operating problems by oneself, and general product or service satisfaction. The product is an object created or generated by a person or a ‘machine’. The service means delivering value for the customer by facilitating results the customer wants to achieve. Services can include both human–system interactions and human–human interactions. The system combines interacting elements organised to achieve one or more intended purposes. Meanwhile, an interactive system means a combination of hardware and/or software and/or services and/or people that users interact with to achieve specific goals [16].

2.2. Usability Metrics for User Experience

Websites have specific functions, such as providing information, contact points, booking capabilities, or payment methods, using tools that exhibit various degrees of usability. Three aspects of system usability quality are the most important for the user: (1) functionality, i.e., the capabilities of the system; (2) ergonomics, meaning the ability to achieve the intended purpose with the least effort possible; and (3) usability, which is the combination of the degree to which the user reached their goals, the effort required, and perceived use satisfaction. Usability tests are most often conducted as exploratory tests by expert respondents and use heuristics or tools that perform automated algorithmic usability assessments [20,23].
A heuristic assessment is a quality assessment process where intuition, experience, or streamlined evaluation principles are essential. It is a decision-making or problem-solving method based on approximate judgment instead of data and calculations. Heuristics are employed in various fields, such as psychology, artificial intelligence, economics, management, and website quality assessment [24]. Heuristic assessment of website usability investigates the user interface and usability using heuristics or interaction design principles. This technique is used in interface design and evaluation intended to identify potential problems related to user-website interactions [25].
Heuristics are generic principles developed over years of research on interface design. They draw on philosophy, psychology, and—mainly—experience with human-computer interaction (HCI). One of the most popular inspection-based usability assessment methods for HCI is heuristic evaluation (HE), described by Nielsen and Molich [26] and then improved by Nielsen [27]. Often used because of its cost-effectiveness and ease of deployment, HE involves at least one experienced expert who follows a set of guidelines (or heuristics) during a system review (evaluation). This makes HE an economical alternative to empirical usability tests with multiple actual users. Heuristic evaluations are probably the most valuable for assessing an existing system or its prototype early to pinpoint major usability issues.
Heuristic evaluation involves HCI experts exploring a system, identifying usability problems, and classifying each problem as a violation of one or more usability principles or heuristics. The testers need to draft two documents to prepare for such an evaluation session: (1) a project overview describing the objectives, target audiences, and expected usage patterns of the system being tested and (2) a list of heuristics [28]. Experts or interface designers browse (explore) the website during a heuristic usability evaluation and analyse it in terms of compatibility with the predefined collection of heuristics. It may include an assessment of navigation controls, content layout, responsiveness, perceptibility, and many other factors of interaction quality. Heuristic usability evaluation aims to identify flaws that may hinder or impede user experience. Still, it is one method for assessing usability and it can be combined with such other techniques as algorithmic tests, statistical analysis, or competitive analysis. The synergistic effect of these methods can yield an exhaustive website usability assessment [23].
Questionnaires have often been used to assess users’ subjective attitudes related to their experience of using a computer system. Human-computer interaction researchers first started developing standardised usability evaluation instruments, such as the UMUX, UMUX-LITE, SUPR-Q, or SUS, in the 1980s [20,29,30]. Although these questionnaires have been built independently and vary in terms of content and form, they all measure the subjective perception of usability [31,32].
The Usability Metric for User Experience (UMUX) and its shorter variant, UMUX-LITE, are some of the latest standardised usability questionnaires [29]. The UMUX is designed to yield results similar to the outcomes of the 10-item System Usability Scale (SUS). It is founded on the ISO 9241-11 [16] definition of usability [20,33]. Psychometric evaluation of the UMUX indicated acceptable levels of reliability (internal consistency), concurrent validity, and sensitivity [34]. The UMUX-LITE conforms to the technology acceptance model (TAM). The UMUX has four items, using a 7-point Likert scale and Cronbach’s alpha coefficient of 0.94 [20,29]. The Standardised User Experience Percentile Rank Questionnaire (SUPR-Q) consists of eight items to measure four website factors: usability, appearance, trust, and loyalty [20,30,35]. According to Sauro [35], the primary potential advantage of the SUPR-Q over the UMUX is that it can measure more than just a single factor, such as usability. Seven of the eight questions on the SUPR-Q are measured with a 5-point scale where 1 equals ‘strongly disagree’ and 5 equals ‘strongly agree’ [30,35].
The Questionnaire for User Interaction Satisfaction (QUIS) was developed as a 27-item, 9-point bipolar scale, representing five latent variables related to the usability construct [29,36]. The Questionnaire for User Interaction Satisfaction is indeed used often. For example, Fezi and Wong [37] invited 32 participants, graphic user interface designers and programmers, to examine the usability of user interface styles for learning a software development suite, namely Adobe Flash CS4, using the QUIS tool. Adinda and Suzianti [38] investigated the usability of a mobile e-administration application with the QUIS and SUS questionnaires. The study confirmed that the user interface needed redesigning following the principles of UI design and 10 Heuristics of User Interface Design. Fang and Lin [39] also employed QUIS to compare the usability differences of VR travel software for mobile phones, such as Google Street View, VeeR VR, and Sites in VR. Other scales are available, such as the Software Usability Measurement Inventory (SUMI), consisting of 50 items with a 3-point Likert scale representing five latent variables [40]. The Post-Study System Usability Questionnaire (PSSUQ) initially consisted of 19 items with a 7-point Likert scale and a ‘not applicable’ (N/A) option. In addition, the Computer System Usability Questionnaire (CSUQ) is its variant for field studies [41]. Another study adapted the WEBsite USability Evaluation Tool (WEBUSE) [42] to evaluate a university’s website usability. The researchers assumed the student perspective and searched for associations between usability and user satisfaction [43]. All these tools are useful for evaluating hardware and software usability and can contribute to improving their usability quality.

2.3. Related Work

Geoportals are integrated web-based systems providing tools for open spatial data sharing and geo-information management online [3]. Blake et al.’s [44] analysis demonstrated that relatively few studies address geoportal usability, even though usability is an important determinant of geoportal quality. For example, one characteristic that is critical for geoportal usability is the graphic user interface (GUI). The purpose of the interface is to provide maximum usability while minimising cognitive effort. Accessibility and usability are both commonly used terms to refer to the satisfaction experienced by a service or product user. However, they are only two of the many concepts used when referring to websites. Usability focuses directly on user experience (subjective perceptions) that emerges from the synergistic effect of design, ergonomics, content, and user interface quality [45].
The literature offers various methods for investigating the usability of websites, including geoportals. The most common types involve users, survey questionnaires, and heuristic evaluations; case studies have demonstrated that results are similar regardless of the method [24]. What is more, Komarkova et al. [24] recommend mixing methods to identify more usability issues. Gkonos et al. [46] noted that geoportals support sharing geospatial data for various purposes and that recent years saw new research areas emerge around these websites. With their detailed spatial datasets, geoportals can aid universities with numerous activities, including research and education. Martins et al. [47] conducted a heuristic evaluation of a web map accessible to various devices. Their tests found some components in need of optimisation. Martínez-Falero et al. [19] employed the System Usability Scale (SUS) to evaluate the technical quality and usability of the SILVANET application using the opinions of an expert panel. The SUS is one of the most widely used questionnaires to measure the usability of and satisfaction with IT systems [48]. The SUS measures global satisfaction with the system, particularly through its subscales of usability and learnability [49]. Capeleti et al. [50] found that the role of geoportals in decision-making is growing and that adequate usability of these websites streamlines effective data exploration for experts and amateurs alike. They demonstrated that data-driven decision-making is critical and has become necessary for anyone seeking to gain new knowledge and draw apt insights in various contexts.
Capeleti et al. [50] employed heuristics in their research. User evaluations revealed the need for usability improvements related to the affordance of interactive map elements and information filters. Vaca et al. [45] verified the usability of the ONTORISK geoportal with online tools and heuristic tests. Bugs et al. [51] built the WebGIS application with free, easy-to-use tools. It consists of a web mapping service with eligible geospatial data layers where users explore and comment. They then tested its usability to pinpoint its main flaws and benefits. Słomska-Przech et al. [52] compared the usability of heat maps with different levels of generalisation for basic map user tasks. A user study compared various heat maps that showed the same input data. The participants perceived the more generalised maps as easier to use, although this result did not match the performance metrics.
WebGIS usability design poses new challenges for information architects because user interactions highly depend on the specific map, making them different from interactions with typical user interfaces [53]. Unrau et al. [54] noted that WebGIS usability assessment is a difficult task because interactions with sophisticated maps and functions may require expert knowledge and a certain amount of experience necessary to both use the applications and interpret data on thematic maps correctly. They presented their experience as a concept for a remote WebGIS usability assessment, which they believed to be a good alternative for ‘expensive and lengthy in-person user studies’ [54]. What is more, Unrau and Kray [55] proposed a new scalable approach that applies visual analytics to logged interaction data with WebGIS, facilitating the interactive exploration and analysis of user behaviour. Abraham [56] reviewed studies to extract usability problems from previous studies, classify them, and identify critical components of WebGIS applications. His results suggest a significant need for a WebGIS-specific usability assessment framework to support WebGIS-specific usability evaluation and provide generic solutions to reoccurring problems.
The literature research revealed that usability assessment is useful for identifying problematic elements needing optimisation to improve the usability quality of map applications.

3. Materials and Methods

Heuristic evaluation usually involves an ‘expert review’ by a single evaluator. Research shows that this testing approach is unsatisfactory. A single person cannot find all design imperfections due to the extensive and specialist profile of the websites. In addition, some usability issues emerge on mobile devices, while others are typical of larger displays. Therefore, tests should involve a diversity of devices and multiple evaluators [26].
This study’s desktop and mobile laboratory tests followed a test scenario with a list of actions that an actual geoportal user would perform. The tests involved 14 expert land-surveying engineers familiar with the working principles of geoinformation websites. The size of the expert panel followed the results of Jakob Nielsen and Tom Landauer [57]. Nielsen and Landauer [57] demonstrated that the number of usability issues uncovered in a usability test involving n users can be expressed with Equation (1):
$N(1 - (1 - L)^n)$,  (1)
where N is the total number of design usability problems, and L is the proportion of usability problems discovered while testing by a single user. Hence, tests with 10 to 15 users can identify most system usability problems.
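The practical meaning of Equation (1) is easy to check numerically. Below is a minimal Python sketch, assuming L = 0.31 (the average problem-discovery rate reported in Nielsen’s work, used here purely as an illustrative value, not one measured in this study):

```python
# Share of usability problems found by n evaluators according to the
# Nielsen-Landauer model: found(n) = 1 - (1 - L)^n.
# L = 0.31 is an illustrative assumption, not a value from this study.
L = 0.31

for n in (1, 5, 10, 14, 15):
    found = 1 - (1 - L) ** n
    print(f"{n:2d} evaluators -> {found:.0%} of problems found")
```

Under this assumption, 5 evaluators already uncover roughly 84% of the problems and 14 evaluators over 99%, which supports the panel size adopted here.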
The research design involves the exploratory approach and inspection-based methods. Before the test proper, each expert was familiarised with the geoportal during a cognitive walkthrough. Next, we conducted the testing session using a test scenario and checklist. The evaluation yielded a subjective assessment of the usability of the geoportals for a direct comparison.

3.1. Research Object

The panel performed user experience tests on three versions of Tomice Municipality’s geoportal: (1) G1: an electronic local zoning plan website (eMPZP) used until 2019 (https://www.tomice.pl/mpzp/, accessed on 1 July 2024); (2) G2: a municipal geoportal in use until 2022 (http://www.mpzp.tomice.pl/, accessed on 1 July 2024); and (3) G3: the current municipal map portal (https://sip.gison.pl/tomice, accessed on 1 July 2024). Note that, historically, Tomice Municipality used G2 for the shortest period, which may indicate that it was merely a ‘transition geoportal’ (Figure 1).
Application G1 is based on raster files and JavaScript. The polygon grid is displayed directly over the raster base map with the Maphilight jQuery plugin. The raster base map is displayed in an inline frame (the HTML iframe tag). The website was designed in line with Web 1.0 standards and the W3C XHTML specification, which determined the application’s usability and functionality. The two other geoportals are vector-based (Table 1).
These geoportals were selected for usability testing because each is designed differently and their development techniques showcase the evolution of how geoinformation is presented online. Their usage history is recorded in the Internet archive. What is more, they are still available on the Internet and constitute part of the public, local-government spatial information infrastructure.
Geoportal G1 does not offer the option to increase the number of details displayed on the map because the primary data source is a single large raster file (graphic file). To give the user an option to change the number of details shown on a map, the geoportal needs several raster maps, each with different features. This way, the application can display the rasters one by one or one over another (in layers) according to the user’s requests through the application’s scripts, such as JavaScript. However, this entails loading consecutive large rasters in the browser window, significantly reducing user comfort. Geoportals G2 and G3 are free of this weakness because they are more advanced technically and employ geospatial databases. Geoportal G1 has only one thematic layer: it displays plots colour-coded to represent zoning. No other thematic layers can be loaded. Therefore, G1 is limited and confined to one type of information. Also, it offers no geodata download. Thus, its functionality is limited. It is worth noting that the programming architecture of G1 prevents any evolutionary upgrades. The ‘compact design’ of G1 is its advantage. This means that all its components are in a single, consolidated set that can be transferred and run offline on any device while still retaining full functionality. It is not possible with G2 and G3. This shows significant design differences between the geoportals.

3.2. Methodology

The tests were conducted during a cognitive walkthrough and following a test scenario. The test scenario provided a framework for the research process: each evaluator followed the scenario, while the exploratory research and cognitive walkthrough were opportunities for the unguided use of the system.
The cognitive walkthrough is a website quality assessment tool especially useful for usability and perceived use quality [58]. It involves the exploration of the website (front-end) and source code (back-end) to pinpoint potential problems and difficulties users might encounter. The cognitive walkthrough is often employed for ad-hoc interface quality assessment. It can be used in early design phases, during prototyping, or for already deployed websites [59]. The cognitive walkthrough aims to find out which website components need to be optimised. By reviewing a list of issues, designers and web developers can improve the interface to enhance usability and elevate the general usability experience, including the conversion rate.

3.2.1. Test Scenario

User (expert, evaluator) observation during their website use can identify what works and what does not work in the graphic user interface (GUI). It offers insight into the causes of the most common user problems. The observations can help improve the tested application [60]. User behaviour can be monitored in an organised manner when they have specific tasks to complete. Experts emphasise that test scenarios should not involve a mere series of task X, task Z, etc. Instead, the tasks should be embedded in a task scenario with context and grounds for specific actions. The test tasks should be doable and reasonable and cover the main activities a typical user performs in the application. No hints or detailed descriptions of individual steps for the tests are provided [61]. It is recommended that the users note down their remarks on an evaluation card in the form of a checklist.

3.2.2. Usability Assessment Checklist

The System Usability Scale (SUS) was developed and published by John Brooke in 1996 [62]. The SUS is an instrument commonly utilised in the usability testing of commercial products. It reflected a strong need in the usability community for a tool that could quickly and easily collect a user’s subjective rating of a product’s usability. According to Bangor et al. [63], the SUS is a highly robust and versatile tool for usability professionals. It has ten questions concerning basic yet fundamental aspects of a ‘system’s’ usability. The SUS questionnaire uses a 5-point Likert scale ranging from ‘strongly agree’ to ‘strongly disagree’. Evaluators are advised to assign scores based on their first impressions of the system rather than overthink their answers. If an auditor or respondent believes they are unable to answer a question, they should pick the middle position on the scale and assign three points. Research shows that the SUS is likely to remain a popular measure of perceived usability for the foreseeable future [32].
The SUS yields a quality score, a number representing the aggregate measure of the system’s general usability. Each subscore was assumed to range from 1 to 5. This means that ‘strongly disagree’ is worth 1 point and ‘strongly agree’ is worth 5 points (Figure 2). Quantitatively speaking, the experts could assign a maximum of 70 points to each question (14 experts × 5 points), which would mean they are unanimous.
Aggregation of the points assigned in the SUS questionnaire reflected the overall perceived use quality of the application, minding that some diagnostic variables (SUS questions) stimulate a higher score (higher-the-better), and some hinder the score (smaller-the-better).
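For context, Brooke’s original scoring procedure converts the ten Likert answers into a single 0–100 score; this differs from the zero-unitarisation applied in the next subsection, which this study uses instead. A minimal Python sketch with a hypothetical answer sheet:

```python
# Standard SUS scoring (Brooke, 1996): odd items contribute (score - 1),
# even items contribute (5 - score); the sum is scaled by 2.5 to 0-100.
def sus_score(responses):
    """responses: ten Likert answers (items 1-10), each scored 1-5."""
    odd = sum(responses[i] - 1 for i in range(0, 10, 2))   # items 1, 3, 5, 7, 9
    even = sum(5 - responses[i] for i in range(1, 10, 2))  # items 2, 4, 6, 8, 10
    return 2.5 * (odd + even)

# Hypothetical single-respondent answers, for illustration only.
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 3]))  # -> 77.5
```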

3.2.3. Aggregate Quality Score

The SUS results were normalised using zero unitarisation. First, the author classified variables 1, 3, 5, 7, and 9 as bigger-the-better variables (BTB) and the remaining variables as smaller-the-better variables (STB) (Appendix A). Bigger-the-better variables should reach the highest values possible (the highest number of points), which is favourable for the investigated phenomenon. For the dependent variable to increase, smaller-the-better variables are expected to reach the lowest values possible, the lowest number of points, which is favourable for the investigated phenomenon. The normalisation followed Equation (2) for bigger-the-better variables and Equation (3) for smaller-the-better variables [64].
$z_{ij} = \frac{x_{ij} - \min_i\{x_{ij}\}}{r_j}$,  (2)
$z_{ij} = \frac{\max_i\{x_{ij}\} - x_{ij}}{r_j}$,  (3)
where $z_{ij} \in [0, 1]$, and:
$z_{ij}$ is a normalised variable,
$z_{ij} = 0 \Leftrightarrow x_{ij} = \min_i\{x_{ij}\}$, $z_{ij} = 1 \Leftrightarrow x_{ij} = \max_i\{x_{ij}\}$,
$x_{ij}$ is the value of a non-normalised variable,
$\min_i\{x_{ij}\}$ is the minimum value of the variable before normalisation,
$\max_i\{x_{ij}\}$ is the maximum value of the variable before normalisation,
$r_j$ is the range of the $j$th variable.
Values of the normalised SUS results are unitless and range from 0 to 1. Therefore, the maximum normalised score for each geoportal is 10. Normalisation paves the way for adding up the results and characterising each geoportal with the GOQ synthetic quality index (geoportal overall quality) (Equation (4)).
$GOQ_i = \sum_{k=1}^{10} X_{ki}$  (4)
where:
$GOQ$ is the Geoportal Overall Quality,
$X_k$ are the diagnostic variables (the normalised SUS questions),
$i$ is the time interval.
Normalisation of usability results can be useful in comprehending associations between the investigated geoportals better; the higher the GOQ, the higher the usability score.
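A minimal Python sketch of Equations (2)–(4), fed with the mobile-mode per-question point totals from Table A1 (Appendix A); it reproduces the GOQ values reported there:

```python
# Zero unitarisation (Equations (2)-(3)) and the GOQ index (Equation (4)),
# applied to the mobile-mode SUS point totals from Table A1.
# Odd SUS questions are bigger-the-better (BTB), even ones smaller-the-better (STB).
RAW = {  # per-question totals assigned by the 14 experts (max 70 each)
    "G1": [43, 29, 61, 17, 49, 28, 66, 26, 57, 25],
    "G2": [44, 33, 57, 21, 45, 32, 56, 27, 48, 31],
    "G3": [57, 25, 60, 20, 55, 29, 58, 21, 57, 26],
}
BTB = {0, 2, 4, 6, 8}  # zero-based indices of questions 1, 3, 5, 7, 9

def goq(name: str) -> float:
    total = 0.0
    for j in range(10):
        col = [scores[j] for scores in RAW.values()]
        lo, hi = min(col), max(col)
        x = RAW[name][j]
        # Equation (2) for BTB variables, Equation (3) for STB variables.
        z = (x - lo) / (hi - lo) if j in BTB else (hi - x) / (hi - lo)
        total += z
    return total

for g in RAW:
    print(g, round(goq(g), 2))  # G1 7.07, G2 0.07, G3 7.78
```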

3.3. Performance Audit

Website performance hinges primarily on design decisions, including the techniques and components used to create the website. The functionality of one of the tested geoportals (G1) is based on a raster base map, which is a large graphic file. Studies from over two decades ago demonstrated that the primary cause of website performance delays—leading to lower usability—was large graphic files (raster files) [65]. Today, slow rendering and poor performance of websites and web applications are mostly due to the synergistic impact of data server delays, complex back-end architecture, an excessive number of components, and too many ‘fancy widgets’, rather than large raster files [66]. The article compares geoportal usability results with their performance metrics to verify these findings.
The geoportals’ performance was tested in a web browser window (desktop mode) using four test applications with a track record of similar measurements [67]: (1) GTmetrix, (2) Pingdom, (3) PageSpeed Insights, and (4) GiftOfSpeed. The author considered values of synthetic performance indices: Performance (GTmetrix, Pingdom, PageSpeed Insights), Structure (GTmetrix), and Speed Score (GiftOfSpeed). Next, the usability results were juxtaposed with performance metrics.
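Such an audit can be scripted. Below is a minimal sketch using the public PageSpeed Insights v5 REST API, one of the four tools listed above; the response field layout (lighthouseResult.categories.performance.score) is assumed from the documented v5 format, and an API key may be needed for repeated runs:

```python
# Query the PageSpeed Insights v5 API for the synthetic Performance index
# of each tested geoportal (desktop strategy). Scores are returned in [0, 1].
import requests

PSI = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
GEOPORTALS = {
    "G1": "https://www.tomice.pl/mpzp/",
    "G2": "http://www.mpzp.tomice.pl/",
    "G3": "https://sip.gison.pl/tomice",
}

for name, url in GEOPORTALS.items():
    r = requests.get(PSI, params={"url": url, "strategy": "desktop"}, timeout=120)
    r.raise_for_status()
    score = r.json()["lighthouseResult"]["categories"]["performance"]["score"]
    print(f"{name}: Performance = {score:.0%}")
```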

4. Results

4.1. Results for the Mobile Mode

The experts assessed the mobile comfort of use of G1 as the worst and that of G3 as the best (SUS question 1). The experts found none of the geoportals complicated to use (SUS question 2), which is also reflected in SUS question 3, as they evaluated all the geoportals as easy to use (Table 2). Moreover, all the evaluators agreed that no expert aid was necessary to use the geoportals on mobile devices (SUS question 4).
The experts highly appreciated the functional integrity of all the geoportals, with G2 scoring the lowest and G3 the highest result. It is corroborated by responses to SUS question 6 (Table 2).
The experts agreed that the tested geoportals are coherent IT systems. They found G1 the easiest to master, while G2 was the most challenging (SUS question 7). This was confirmed by answers to SUS question 8, where the experts indicated G2 as the least usable. Furthermore, they considered it the least confidence-inspiring, where confidence is defined as ‘confidence in using the system’. They perceived G2 as the most challenging to use and as requiring the most time to master. In summary, the evaluators assessed G2’s mobile usability as the lowest under the employed research design.

4.2. Results for the Desktop Mode

The experts agreed that the geoportal they would use the most in the desktop mode was G3 (SUS question 1). It was also evaluated as the least complicated to use (SUS question 2), while the most complicated to use was G2 (Table 3). This conclusion was confirmed by answers to SUS question 3, according to which G2 was the most complicated to use compared to the others. The evaluators believed no help from experts or technical assistance was needed to use any of the geoportals (SUS question 4), but G2 was rated the lowest. Moreover, G3 had the best feature integration, while G1 had the poorest, which is consistent with answers to SUS question 6. The experts agreed they had no problems learning how to use the geoportals in the desktop mode, even though G2 received the lowest score.
The least inconvenient to use in the desktop mode was G3, while G1 caused the greatest problems. The experts felt the least confident using G2 and G1. At the same time, they believed none of the geoportals required preparatory training or preliminary reading. G2 had the worst result in this domain.

4.3. Aggregate Results

G3 had the highest results for both mobile and desktop modes: 7.78 and 7.9, respectively. G2 fared the worst: 0.07 and 1.91, respectively (Appendix A, Table A1). G1 had scores of 7.07 and 4.53. Normalised and aggregated results demonstrated that the expert rating of the usability of geoportals G3 and G1 on mobile devices was similar despite significant design differences. Note that G1 is based on raster maps, JavaScript, and inline frames. Moreover, it is not responsive, which means it works and looks the same regardless of the device type and viewport size. At the same time, its architecture and operation are relatively simple, which may enhance its positive user perception.
The least usable among the three was G2, with the lowest GOQ. It seems to indicate that it is not enough to make a website responsive. The execution of the responsiveness is just as important because it affects the comfort of the use of the responsive version. Still, G3 had the best usability score for the mobile mode. It was also awarded the highest score on desktop devices (Appendix A, Table A2). All measurements confirmed the lowest usability of G2 under the employed research design.

4.4. Performance Audit Results

The performance audit demonstrated that each consecutive version of Tomice Municipality’s geoportal performed worse than the previous one. According to the test applications, geoportal G1 offered the best performance despite its development techniques being perceived as outdated today. The synthetic performance index for G1 reached no less than 90% for all measurements, which is an excellent result according to the scale adopted in the article (Table 4). Geoportal G2 had a slightly lower result, while G3—the current official geoportal of Tomice Municipality—fared the worst. The performance of G3 was around the acceptance threshold of 50% (except for measurements with Pingdom), which is an unsatisfactory result, according to the scale.
The measurement results are shown in radar charts (Figure 3). Values of the Performance index measured with the selected test applications determine the sides of the polygons in the charts (Figure 3a). The greater the area of a polygon, the better the performance. The visualisation shows that the performance of G1 (the ‘archaic’ WebGIS) is approximately two times better than that of G3, which is in use today.
Figure 3b shows the results of the aggregate usability measurement (according to GOQ) in the desktop mode (GOQ D) and mobile mode (GOQ M). The chart reveals that the applications are slightly more usable on mobile devices according to the synthetic index.
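A chart of this kind is straightforward to reproduce. The matplotlib sketch below uses placeholder index values that merely mimic the pattern described above (G1 at or above 90%, G3 around the 50% threshold); they are not the exact figures from Table 4:

```python
# Radar (spider) chart in the spirit of Figure 3a; values are placeholders.
import numpy as np
import matplotlib.pyplot as plt

tools = ["GTmetrix", "Pingdom", "PageSpeed Insights", "GiftOfSpeed"]
perf = {"G1": [95, 93, 90, 92], "G2": [85, 88, 80, 82], "G3": [50, 70, 48, 52]}

angles = np.linspace(0, 2 * np.pi, len(tools), endpoint=False).tolist()
angles += angles[:1]  # repeat the first angle to close each polygon

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for name, vals in perf.items():
    ax.plot(angles, vals + vals[:1], label=name)
    ax.fill(angles, vals + vals[:1], alpha=0.15)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(tools)
ax.legend(loc="lower right")
plt.show()
```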

5. Discussion

Although G1 is an archaic application following XHTML and based on raster maps and inline frames, its usability was evaluated as better than that of G2, which is responsive and follows current design recommendations. This means that archaic design techniques are not ‘doomed to oblivion’. When used correctly, they can build a website that is user-friendly for a long time. Note, however, that the tests focused on user experience, not the functionality of the geoportals. Hence, archaic design techniques may prove incapable of ensuring the right level of application functionality, even if the application remains comfortable to use, making it less competitive than applications built with modern design techniques. Modern applications offer on-map measurements and can manage large datasets (also in real time) while maintaining adequate performance, which is out of reach of archaic design techniques. It is noteworthy that these considerations are part of the discussion on what is more relevant to users: usability or functionality. The answer seems to be relative and context-dependent.

5.1. Subject Matter and Aim of UX Tests

The characteristics of geoportals stem primarily from the fact that [68] (p. 45): ‘geoportal is a website that is considered an entry point to geographic content on the web or, more simply, a website where geographic content can be discovered’. Butkovic et al. [69] (p. 79) noted that ‘geoportals serve as intermediaries between users and geospatial resources, enabling the discovery and access of digital geospatial data’. This is why universal heuristics may be ineffective at detecting usability/UX problems specific to geoportals. This inspired Quiñones et al. [68] to devise ten heuristics for geoportal UX evaluation. Still, after UX problems are identified with a selected heuristic, a detailed investigation is needed to determine the proper corrective actions, such as what to optimise or what function to implement to improve UX. Butkovic et al. [69] emphasised that UX designers, testers, and auditors face the challenge of the complex and dynamic character of geoportals due to the highly interactive user interface and dynamic presentation of spatial data. This makes any unambiguous identification of causes of poor user experience very hard. Butkovic et al. [69] proposed a new framework for measuring user experience in response to these challenges. They identified and adjusted traditional measurement factors to the geospatial context and employed new criteria for evaluating cartographic visualisation and analytical capabilities of geoportals. Król et al. [20] developed the Functionality Assessment Checklist (FAC) for evaluating geoportals useful in planning sustainable tourism. The FAC is a set of functions the authors believe a geoportal can have. Although Król et al. [20] investigated functionality rather than usability, they pointed out that both contribute to high geoportal quality. Ferrer et al. [70] came up with guidelines for developing interactive maps (EMMAP—Energy Marine MAP) with georeferenced information on variables associated with marine energy from the perspective of user experience. He et al. [15] created a method for testing geoportal usability under their GeoTest research project. The method is based on the ISO 9241-11 framework. It splits the usability evaluation into three components, namely effectiveness, efficiency, and satisfaction. Komarkova et al. [24] reported a study where usability user testing, survey, and heuristic evaluation were combined to evaluate the usability of the Prague Geoportal. Gkonos et al. [46] introduced a new GUI of the Geodata Versatile Information Transfer environment geoportal. They presented the results of a between-subjects study with 20 participants, evaluating and comparing its previous and latest versions.
The studies mentioned above demonstrate that geoportals are considered a separate group of websites with a unique profile and technical specifications. Therefore, researchers often employed adjusted guidelines, standards, checklists, concepts, audit templates, frameworks, etc., most often used to evaluate ‘typical websites’ to make them compatible with geoportals. It is not a new approach. Some research tools and techniques are frequently modified and adapted to advancing technology and/or design techniques. They are also customised to evaluate specific types of websites, such as online stores or e-commerce websites [71].
The present article follows a different approach. It employs a tested and recognised method, the System Usability Scale (SUS) [62]. It has not been modified or adapted for geoportals. Note that most of the studies discussed above aimed at devising a method for testing geoportal usability/UX. It could be done only with a new tool or by modifying an existing method, tool, etc. The research objective was not to assess the usability/UX of a geoportal or geoportals to improve them. Instead, the articles aimed to devise a method/tool for assessing geoportal quality in terms of usability/UX. This assumption is critical for the employed research design. This research aimed to determine the usability of three different WebGIS map applications. The primary interest under the research design was the user and their ‘first impression’ of a specific website rather than attributes of research tools and improvement of their effectiveness for geoportal research. There is one more approach to geoportal UX testing, which focuses on practical results. It involves laboratory usability/UX tests that pinpoint specific interface components needing optimisation to improve the geoportal’s quality [72]. Therefore, the literature review identified no fewer than three types of research on geoportal UX depending on the research objective: (1) studies aimed at developing new or refining existing UX assessment methods and/or tools to adapt them for geoportal quality evaluations; (2) studies of general geoportal usability/UX assessment to compare several geoportals, for example; (3) in-depth, technical, engineering, design, and case studies identifying specific components of geoportals to be optimised so that their usability/UX can be improved.

5.2. The High Quality of a Geoportal Consists of Usability/UX and Performance

The primary function of geoportals is to provide spatial data. They give access to geoinformation and geoservices. Jiang et al. [3] demonstrated that they usually offer access to distributed data systems with maps, data search functions, and data downloads. Some offer online analysis and processing services, enhanced semantic search engines, and dynamic visualisation tools [3]. The characteristic graphic user interface is an important feature of these websites. It ensures spatial data and service availability [73]. Geoportals should allow users to visually (spatially, thematically and temporally) navigate spatial data, select desired datasets and areas, and directly download data using the graphic interface. On this basis, Kellenberger et al. [73] proposed three key recommendations for designing geoportals: (1) a full-screen map with simple navigation, (2) instant user feedback, and (3) multifunctional search. Note, however, that one precondition of high usability is to ensure geoportal performance and responsiveness so that it can be browsed on mobile devices and large screens [6].
Studies show that UX laboratory testing is useful for diagnosing design flaws, particularly for websites and web applications. Such tests usually involve respondents (experts, users), are case studies, and employ checklists and survey questionnaires [74]. Indeed, usability is also defined by such quality attributes as performance (load speed) and responsiveness, which is how the website is adapted to the device’s viewport. This means that performance [75] and responsiveness [76] audits are, in fact, types of usability audits targeted at specific quality attributes. Importantly, performance and responsiveness audits can be done using web applications, which makes them automated and algorithmic. Consequently, human usability tests can be enhanced with application procedures. It applies to geoportals as well [67].
According to the test applications, the best-performing geoportal was G1, where archaic design techniques are used to build its two main components: (1) large rasters (base maps) and (2) an inline frame (iframe). These techniques were abandoned mostly to improve the functionality, performance, and SEO visibility of geoportals to crawlers. The geoportal’s structure is founded on HTML (DHTML), Cascading Style Sheets (CSS), and JavaScript. This stack no longer offers high functionality and usability, especially on mobile devices. These solutions have now been replaced by dynamic scripts and programming libraries that offer sophisticated functions for presenting and sharing spatial data [20]. Still, a surge in shared data volume and the availability of advanced geospatial services have been detrimental to performance. Therefore, although abandoning the design techniques that caused functionality and performance issues a decade ago, such as large raster files, worked, a ‘new problem’ arose after some time: large scripts and programming libraries, often implemented from third-party sources. The usability tests indirectly revealed that users still expect well-performing and quick applications, even if it means limited functionality and usability. A question arises whether the performance optimisation techniques used today, such as code minification and compression, are sufficient [12] to ensure the proper performance of applications as extensive as geoportals. The question remains valid, and further research might shed some light on the matter.

6. Conclusions

According to the author’s best knowledge, this is one of the few studies of its kind, or even the first, in which geoportal usability is evaluated from a temporal perspective in the context of the development of online spatial data presentation. The article compares the user experience of three geoportals of Tomice Municipality, from the oldest to the latest. Each was developed using a different design technique and used in a specific period. All the geoportals are still available on the Internet, but only one is in official use, while the others are kept for documentation purposes.
The employed research design does not provide a list of audit recommendations that would enumerate design flaws that need to be amended to improve usability. Still, the research has demonstrated that some websites can retain usability even considering the dynamic advances in hardware, software, and development techniques, despite their design being perceived as outdated today. Selected constituents of usability, such as performance (measured algorithmically) or subjective ‘ease and simplicity of use’, may affect general user experience to a significant degree. It is evident in the fact that G1 received a high usability score despite its archaic design, which, somewhat ironically, makes it easy to use and efficient: it loads quickly on any device and (still) retains its functionality. Notably, each of the geoportals was ‘valid and modern’ in the Internet era when it was used. At the time, it was considered usable, useful, and functional.

Research Limitations and Practical Implications

The study is, in a way, limited by the single usability assessment method. It can assess the overall user experience without diving into the reasons for high or low scores. This research design offers no way of obtaining a list of detailed audit recommendations useful for website optimisation. It may, however, be a starting point for in-depth usability studies.
The first SUS question (1. I think I would like to use this system frequently) may pose certain difficulties when assessing the usability of geoinformation systems. The respondent’s or auditor’s answer may be guided by the very nature of the application. Geoinformation systems are used to find specific spatial information, such as area, location, infrastructure, land zoning, protected areas, or points in space. These systems are not commonly used, for example, for entertainment. As a result, if a study involves many respondents, they may answer that they would not use the system, but not because of usability issues; they would simply not need to use the geoportal. Therefore, it is important to remind the respondents from time to time during the test that they assess usability, which is the comfort of use, not usefulness or functionality.
The mobile tests were done on various smartphones, while desktop tests involved a group of the same desktop computers. This means the mobile testing environment was diversified, and the desktop testing environment was uniform. Nevertheless, the author assumed that considering the hardware and software standardisation today, the diversified mobile testing environment should be beneficial for the results. Moreover, the usability of the geoportals depends mostly on their design, including design techniques. They are also made in the client-server architecture, which makes the user’s operating system less relevant.

Funding

Co-financed by the Minister of Science under the ‘Regional Initiative of Excellence’ programme. Agreement No. RID/SP/0039/2024/01. Subsidised amount PLN 6,187,000.00. Project period 2024–2027.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All trademarks and registered trademarks mentioned herein are the property of their respective owners. The company and product names used in this document are for identification purposes only.

Acknowledgments

The author wishes to express his gratitude to the reviewers for their constructive criticism, which contributed to the final content of the paper. The paper was written at the Digital Cultural Heritage Laboratory (https://culturalheritage.urk.edu.pl, accessed on 1 July 2024), part of the Department of Land Management and Landscape Architecture at the Faculty of Environmental Engineering and Land Surveying of the University of Agriculture in Krakow, Poland.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

Table A1. Normalised SUS results for the mobile mode.
| Variable Code | X1 (BTB) | N | X2 (STB) | N | X3 (BTB) | N | X4 (STB) | N | X5 (BTB) | N | X6 (STB) | N | X7 (BTB) | N | X8 (STB) | N | X9 (BTB) | N | X10 (STB) | N | GOQ |
| G1 | 43 | 0.000 | 29 | 0.500 | 61 | 1.000 | 17 | 1.000 | 49 | 0.400 | 28 | 1.000 | 66 | 1.000 | 26 | 0.167 | 57 | 1.000 | 25 | 1.000 | 7.07 |
| G2 | 44 | 0.071 | 33 | 0.000 | 57 | 0.000 | 21 | 0.000 | 45 | 0.000 | 32 | 0.000 | 56 | 0.000 | 27 | 0.000 | 48 | 0.000 | 31 | 0.000 | 0.07 |
| G3 | 57 | 1.000 | 25 | 1.000 | 60 | 0.750 | 20 | 0.250 | 55 | 1.000 | 29 | 0.750 | 58 | 0.200 | 21 | 1.000 | 57 | 1.000 | 26 | 0.833 | 7.78 |
| min. | 43 |  | 25 |  | 57 |  | 17 |  | 45 |  | 28 |  | 56 |  | 21 |  | 48 |  | 25 |  | 0.07 |
| max. | 57 |  | 33 |  | 61 |  | 21 |  | 55 |  | 32 |  | 66 |  | 27 |  | 57 |  | 31 |  | 7.78 |
BTB—bigger-the-better, STB—smaller-the-better, N—normalised value. G1: https://www.tomice.pl/mpzp; G2: http://www.mpzp.tomice.pl; G3: https://sip.gison.pl/tomice (accessed on 1 July 2024).
Table A2. Normalised SUS results for the desktop mode.
| Variable Code | X1 (BTB) | N | X2 (STB) | N | X3 (BTB) | N | X4 (STB) | N | X5 (BTB) | N | X6 (STB) | N | X7 (BTB) | N | X8 (STB) | N | X9 (BTB) | N | X10 (STB) | N | GOQ |
| G1 | 42 | 0.000 | 27 | 0.333 | 63 | 1.000 | 16 | 1.000 | 48 | 0.000 | 28 | 0.000 | 61 | 1.000 | 34 | 0.000 | 58 | 0.200 | 23 | 1.000 | 4.53 |
| G2 | 48 | 0.375 | 29 | 0.000 | 59 | 0.000 | 20 | 0.000 | 52 | 0.364 | 26 | 0.500 | 56 | 0.000 | 26 | 0.667 | 57 | 0.000 | 27 | 0.000 | 1.91 |
| G3 | 58 | 1.000 | 23 | 1.000 | 62 | 0.750 | 18 | 0.500 | 59 | 1.000 | 24 | 1.000 | 58 | 0.400 | 22 | 1.000 | 62 | 1.000 | 26 | 0.250 | 7.90 |
| min. | 42 |  | 23 |  | 59 |  | 16 |  | 48 |  | 24 |  | 56 |  | 22 |  | 57 |  | 23 |  | 1.91 |
| max. | 58 |  | 29 |  | 63 |  | 20 |  | 59 |  | 28 |  | 61 |  | 34 |  | 62 |  | 27 |  | 7.90 |
BTB—bigger-the-better, STB—smaller-the-better, N—normalised value. G1: https://www.tomice.pl/mpzp; G2: http://www.mpzp.tomice.pl; G3: https://sip.gison.pl/tomice (accessed on 1 July 2024).

References

  1. Olszewski, R.; Pałka, P.; Wendland, A.; Majdzińska, K. Application of Cooperative Game Theory in a Spatial Context: An Example of the Application of the Community-Led Local Development Instrument for the Decision Support System of Biogas Plants Construction. Land Use Policy 2021, 108, 105485.
  2. Resch, B.; Zimmer, B. User Experience Design in Professional Map-Based Geo-Portals. ISPRS Int. J. Geo-Inf. 2013, 2, 1015–1037.
  3. Jiang, H.; Van Genderen, J.; Mazzetti, P.; Koo, H.; Chen, M. Current Status and Future Directions of Geoportals. Int. J. Digit. Earth 2020, 13, 1093–1114.
  4. Tait, M. Implementing Geoportals: Applications of Distributed GIS. Comput. Environ. Urban Syst. 2005, 29, 33–47.
  5. Maguire, D.; Longley, P. The Emergence of Geoportals and Their Role in Spatial Data Infrastructures. Comput. Environ. Urban Syst. 2005, 29, 3–14.
  6. Granell, C.; Miralles, I.; Rodríguez-Pupo, L.; González-Pérez, A.; Casteleyn, S.; Busetto, L.; Pepe, M.; Boschetti, M.; Huerta, J. Conceptual Architecture and Service-Oriented Implementation of a Regional Geoportal for Rice Monitoring. ISPRS Int. J. Geo-Inf. 2017, 6, 191.
  7. Neis, P.; Zielstra, D. Recent Developments and Future Trends in Volunteered Geographic Information Research: The Case of OpenStreetMap. Future Internet 2014, 6, 76–106.
  8. Helbich, M.; Amelunxen, C.; Neis, P.; Zipf, A. Comparative Spatial Analysis of Positional Accuracy of OpenStreetMap and Proprietary Geodata. In Proceedings of the GI_Forum 2012: Geovisualization, Society and Learning, Salzburg, Austria, 4–6 July 2012.
  9. Zunino, A.; Velázquez, G.; Celemín, J.; Mateos, C.; Hirsch, M.; Rodriguez, J. Evaluating the Performance of Three Popular Web Mapping Libraries: A Case Study Using Argentina’s Life Quality Index. ISPRS Int. J. Geo-Inf. 2020, 9, 563.
  10. Horbiński, T.; Cybulski, P. Similarities of global web mapping services functionality in the context of responsive web design. Geod. Cartogr. 2018, 67, 159–177.
  11. McMahon, D.D.; Smith, C.C.; Cihak, D.F.; Wright, R.; Gibbons, M.M. Effects of Digital Navigation Aids on Adults with Intellectual Disabilities: Comparison of Paper Map, Google Maps, and Augmented Reality. J. Spec. Educ. Technol. 2015, 30, 157–165.
  12. Król, K. Comparative analysis of selected online tools for JavaScript code minification. A case study of a map application. Geomat. Landmanag. Landsc. 2020, 2, 119–129. [Google Scholar] [CrossRef]
  13. Cummings, S.; White, N.; Schoenmakers, M.; van Reijswoud, V.; Koopman, M.; Zielinski, C.; Mugarura, C.; Assa, R.; Harish, S. Checklist for the development of portals for international development. Knowl. Manag. Dev. J. 2019, 14, 83–94. Available online: https://www.km4djournal.org/index.php/km4dj/article/view/384 (accessed on 1 July 2024).
  14. Marshall, P.; Morris, R.; Rogers, Y.; Kreitmayer, S.; Davies, M. Rethinking “Multi-User”: An in-the-Wild Study of How Groups Approach a Walk-up-and-Use Tabletop Interface. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’11), Vancouver, BC, Canada, 7–12 May 2011; ACM: New York, NY, USA, 2011; pp. 3033–3042. [Google Scholar] [CrossRef]
  15. He, X.; Persson, H.; Östman, A. Geoportal usability evaluation. Int. J. Spat. Data Infrastruct. Res. 2012, 7, 88–106. [Google Scholar] [CrossRef]
  16. ISO 9241-11:2018; Ergonomics of Human-System Interaction—Part 11: Usability: Definitions and Concepts. ISO: Geneva, Switzerland, 2018. Available online: https://www.iso.org/standard/63500.html (accessed on 1 July 2024).
  17. Arthana, I.K.R.; Pradnyana, I.M.A.; Dantes, G.R. Usability Testing on Website Wadaya Based on ISO 9241-11. J. Phys. Conf. Ser. 2019, 1165, 012012. [Google Scholar] [CrossRef]
  18. Travis, D. 247 Web Usability Guidelines. Available online: http://www.userfocus.co.uk/resources/guidelines.html (accessed on 1 July 2024).
  19. Martínez-Falero, J.; Ayuga-Tellez, E.; Gonzalez-Garcia, C.; Grande-Ortiz, M.; Garrido, A. Experts’ Analysis of the Quality and Usability of SILVANET Software for Informing Sustainable Forest Management. Sustainability 2017, 9, 1200. [Google Scholar] [CrossRef]
  20. Król, K.; Zdonek, D.; Sroka, W. Functionality Assessment Checklist for Evaluating Geoportals Useful in Planning Sustainable Tourism. Sustainability 2024, 16, 5242. [Google Scholar] [CrossRef]
  21. ISO/IEC 25000:2014; Systems and Software Engineering—Systems and Software Quality Requirements and Evaluation (SQuaRE)—Guide to SQuaRE (Edition 2, 2014). ISO: Geneva, Switzerland, 2014. Available online: https://www.iso.org/standard/64764.html (accessed on 1 July 2024).
  22. ISO/IEC 25010:2023; Systems and Software Engineering—Systems and Software Quality Requirements and Evaluation (SQuaRE)—Product Quality Model (Edition 2, 2023). ISO: Geneva, Switzerland, 2023. Available online: https://www.iso.org/standard/78176.html (accessed on 1 July 2024).
  23. Namoun, A.; Alrehaili, A.; Tufail, A. A Review of Automated Website Usability Evaluation Tools: Research Issues and Challenges. In Design, User Experience, and Usability: UX Research and Design; Soares, M.M., Rosenzweig, E., Marcus, A., Eds.; Springer International Publishing: Cham, Switzerland, 2021; Volume 12779, pp. 292–311. [Google Scholar] [CrossRef]
  24. Komarkova, J.; Sedlak, P.; Struska, S.; Dymakova, A. Usability Evaluation of the Prague Geoportal: Comparison of Methods. In Proceedings of the 2019 International Conference on Information and Digital Technologies (IDT), Zilina, Slovakia, 25–27 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 223–228. [Google Scholar] [CrossRef]
  25. Jabłonowski, J.; Gołębiowska, I. Multi-Criteria Assessment of the Official Map Services of Capital City of Warsaw. Pol. Cartogr. Rev. 2019, 51, 67–79. [Google Scholar] [CrossRef]
  26. Nielsen, J.; Molich, R. Heuristic Evaluation of User Interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems Empowering People—CHI ’90, Seattle, WA, USA, 1–5 April 1990; ACM Press: New York, NY, USA, 1990; pp. 249–256. [Google Scholar] [CrossRef]
  27. Nielsen, J. Heuristic evaluation. In Usability Inspection Methods; Nielsen, J., Mack, R.L., Eds.; John Wiley & Sons, Inc.: New York, NY, USA, 1994. [Google Scholar]
  28. Levi, M.D.; Conrad, F.G. Usability Testing of World Wide Web Sites. In Proceedings of the CHI ’97 Extended Abstracts on Human Factors in Computing Systems Looking to the Future—CHI ’97, Atlanta, GA, USA, 22–27 March 1997; ACM Press: New York, NY, USA, 1997; p. 227. [Google Scholar] [CrossRef]
  29. Berkman, M.I.; Karahoca, D. Re-Assessing the Usability Metric for User Experience (UMUX) Scale. J. Usability Stud. 2016, 11, 89–109. Available online: https://uxpajournal.org/assessing-usability-metric-umux-scale/ (accessed on 1 July 2024).
  30. Klug, B. An Overview of the System Usability Scale in Library Website and System Usability Testing. Weav. J. Libr. User Exp. 2017, 1. [Google Scholar] [CrossRef]
  31. Borsci, S.; Federici, S.; Bacci, S.; Gnaldi, M.; Bartolucci, F. Assessing User Satisfaction in the Era of User Experience: Comparison of the SUS, UMUX, and UMUX-LITE as a Function of Product Experience. Int. J. Hum.–Comput. Interact. 2015, 31, 484–495. [Google Scholar] [CrossRef]
  32. Lewis, J.R. Measuring Perceived Usability: The CSUQ, SUS, and UMUX. Int. J. Hum.–Comput. Interact. 2018, 34, 1148–1156. [Google Scholar] [CrossRef]
  33. Finstad, K. The Usability Metric for User Experience. Interact. Comput. 2010, 22, 323–327. [Google Scholar] [CrossRef]
  34. Lewis, J.R. Critical Review of “The Usability Metric for User Experience”. Interact. Comput. 2013, 25, 320–324. [Google Scholar] [CrossRef]
  35. Sauro, J. SUPR-Q: A comprehensive measure of the quality of the website user experience. J. Usability Stud. 2015, 10, 68–86. Available online: https://uxpajournal.org/supr-q-a-comprehensive-measure-of-the-quality-of-the-website-user-experience/ (accessed on 1 July 2024).
  36. Chin, J.P.; Diehl, V.A.; Norman, L.K. Development of an Instrument Measuring User Satisfaction of the Human-Computer Interface. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems—CHI ’88, Washington, DC, USA, 15–19 May 1988; ACM Press: New York, NY, USA, 1988; pp. 213–218. [Google Scholar] [CrossRef]
  37. Feizi, A.; Wong, C.Y. Usability of user interface styles for learning a graphical software application. In Proceedings of the 2012 International Conference on Computer & Information Science (ICCIS), Kuala Lumpur, Malaysia, 12–14 June 2012; IEEE: Piscataway, NJ, USA, 2012; Volume 2, pp. 1089–1094. [Google Scholar] [CrossRef]
  38. Adinda, P.P.; Suzianti, A. Redesign of User Interface for E-Government Application Using Usability Testing Method. In Proceedings of the 4th International Conference on Communication and Information Processing, Qingdao, China, 2–4 November 2018; ACM: New York, NY, USA, 2018; pp. 145–149. [Google Scholar] [CrossRef]
  39. Fang, Y.-M.; Lin, C. The Usability Testing of VR Interface for Tourism Apps. Appl. Sci. 2019, 9, 3215. [Google Scholar] [CrossRef]
  40. Kirakowski, J. The software usability measurement inventory: Background and usage. In Usability Evaluation in Industry; Jordan, P., Thomas, B., Weerdmeester, B.A., McClelland, I.L., Eds.; Taylor & Francis: London, UK, 1996; pp. 169–178. [Google Scholar]
  41. Lewis, J.R. Psychometric Evaluation of the Post-Study System Usability Questionnaire: The PSSUQ. Proc. Hum. Factors Soc. Annu. Meet. 1992, 36, 1259–1260. [Google Scholar] [CrossRef]
  42. Chiew, T.K.; Salim, S.S. WEBUSE: Website Usability Evaluation Tool. Malays. J. Comput. Sci. 2003, 16, 47–57. [Google Scholar]
  43. Karani, A.; Thanki, H.; Achuthan, S. Impact of University Website Usability on Satisfaction: A Structural Equation Modelling Approach. Manag. Labour Stud. 2021, 46, 119–138. [Google Scholar] [CrossRef]
  44. Blake, M.; Majewicz, K.; Tickner, A.; Lam, J. Usability analysis of the Big Ten Academic Alliance Geoportal: Findings and recommendations for improvement of the user experience. Code4Lib J. 2017, 38. Available online: https://journal.code4lib.org/articles/12932 (accessed on 1 July 2024).
  45. Duque Vaca, M.; Romero Canizares, F.; Jimenez Builes, J. Validating a Georeferenced Map Viewer Through Online and Manual Tests. In Proceedings of the 2019 International Conference on Inclusive Technologies and Education (CONTIE), San Jose del Cabo, Mexico, 30 October–1 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 91–916. [Google Scholar] [CrossRef]
  46. Gkonos, C.; Iosifescu Enescu, I.; Hurni, L. Spinning the Wheel of Design: Evaluating Geoportal Graphical User Interface Adaptations in Terms of Human-Centred Design. Int. J. Cartogr. 2019, 5, 23–43. [Google Scholar] [CrossRef]
  47. Martins, V.E.; Schmidt, M.A.R.; Delazari, L.S. Selecting Usability Heuristics to Evaluate Responsive Maps: Case Study WebGIS UFPR CampusMap. Abstr. Int. Cartogr. Assoc. 2021, 3, 1–2. [Google Scholar] [CrossRef]
  48. Kortum, P.; Peres, S.C. The Relationship Between System Effectiveness and Subjective Usability Scores Using the System Usability Scale. Int. J. Hum.–Comput. Interact. 2014, 30, 575–584. [Google Scholar] [CrossRef]
  49. Bangor, A.; Kortum, P.; Miller, J.A. Determining what individual SUS scores mean: Adding an adjective rating scale. J. Usability Stud. 2009, 4, 114–123. [Google Scholar]
  50. Capeleti, B.S.; Santos, C.Q.; De Souza, J.I.; Freire, A.P. Usability Evaluation of a Brazilian Dam Safety Data Exploration Platform: A Consolidation of Results from User Tests and Heuristic Evaluation. In Human-Computer Interaction—INTERACT 2023; Abdelnour Nocera, J., Kristín Lárusdóttir, M., Petrie, H., Piccinno, A., Winckler, M., Eds.; Springer Nature Switzerland: Cham, Switzerland, 2023; Volume 14145, pp. 100–119. [Google Scholar] [CrossRef]
  51. Bugs, G.; Granell, C.; Fonts, O.; Huerta, J.; Painho, M. An Assessment of Public Participation GIS and Web 2.0 Technologies in Urban Planning Practice in Canela, Brazil. Cities 2010, 27, 172–181. [Google Scholar] [CrossRef]
  52. Słomska-Przech, K.; Panecki, T.; Pokojski, W. Heat Maps: Perfect Maps for Quick Reading? Comparing Usability of Heat Maps with Different Levels of Generalization. ISPRS Int. J. Geo-Inf. 2021, 10, 562. [Google Scholar] [CrossRef]
  53. Unrau, R.; Kray, C. Mining Map Interaction Semantics in Web-Based Geographic Information Systems (WebGIS) for Usability Analysis. AGILE GIScience Ser. 2021, 2, 1–11. [Google Scholar] [CrossRef]
  54. Unrau, R.; Kudekar, A.; Kray, C. Interaction Pattern Analysis for WebGIS Usability Evaluation. Trans. GIS 2022, 26, 3374–3388. [Google Scholar] [CrossRef]
  55. Unrau, R.; Kray, C. Enhancing Usability Evaluation of Web-Based Geographic Information Systems (WebGIS) with Visual Analytics. In Proceedings of the 11th International Conference on Geographic Information Science (GIScience 2021)—Part I. Leibniz International Proceedings in Informatics (LIPIcs); Schloss Dagstuhl—Leibniz-Zentrum für Informatik: Wadern, Germany, 2020; Volume 177, pp. 15:1–15:16. [Google Scholar] [CrossRef]
  56. Abraham, S.A. Usability Problems in GI Web Applications: A Lesson from Literature. AGILE GISci. Ser. 2021, 2, 1–7. [Google Scholar] [CrossRef]
  57. Nielsen, J.; Landauer, T.K. A Mathematical Model of the Finding of Usability Problems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems—CHI ’93, Amsterdam, The Netherlands, 24–29 April 1993; ACM Press: New York, NY, USA, 1993; pp. 206–213. [Google Scholar] [CrossRef]
  58. Mahatody, T.; Sagar, M.; Kolski, C. State of the Art on the Cognitive Walkthrough Method, Its Variants and Evolutions. Int. J. Hum.–Comput. Interact. 2010, 26, 741–785. [Google Scholar] [CrossRef]
  59. Gossen, T.; Nitsche, M.; Nürnberger, A. Knowledge Journey: A Web Search Interface for Young Users. In Proceedings of the Symposium on Human-Computer Interaction and Information Retrieval, Cambridge, MA, USA, 4 October 2012; ACM: New York, NY, USA, 2012; pp. 1–10. [Google Scholar] [CrossRef]
  60. Ekşioğlu, M.; Kiris, E.; Çapar, B.; Selçuk, M.N.; Ouzeir, S. Heuristic Evaluation and Usability Testing: Case Study. In Internationalization, Design and Global Development; Rau, P.L.P., Ed.; Springer: Berlin/Heidelberg, Germany, 2011; Volume 6775, pp. 143–151. [Google Scholar] [CrossRef]
  61. McCloskey, M. Turn User Goals into Task Scenarios for Usability Testing. Nielsen Norman Group. Available online: https://www.nngroup.com/articles/task-scenarios-usability-testing/ (accessed on 1 July 2024).
  62. Brooke, J. SUS: A ‘quick and dirty’ usability scale. In Usability Evaluation in Industry; Jordan, P.W., Thomas, B., Weerdmeester, B.A., McClelland, I.L., Eds.; Taylor and Francis: London, UK, 1996; pp. 189–194. [Google Scholar]
  63. Bangor, A.; Kortum, P.T.; Miller, J.T. An Empirical Evaluation of the System Usability Scale. Int. J. Hum.–Comput. Interact. 2008, 24, 574–594. [Google Scholar] [CrossRef]
  64. Król, K.; Kukulska-Kozieł, A.; Cegielska, K.; Salata, T.; Hernik, J. Turbulent Events Effects: Socioeconomic Changes in Southern Poland as Captured by the LSED Index. Sustainability 2024, 16, 38. [Google Scholar] [CrossRef]
  65. Nielsen, J. Usability Engineering; Academic Press, Inc.: Cambridge, MA, USA, 1993. [Google Scholar]
  66. Nielsen, J. Website Response Times. Nielsen Norman Group. Available online: https://www.nngroup.com/articles/website-response-times/ (accessed on 1 July 2024).
  67. Król, K.; Sroka, W. Internet in the Middle of Nowhere: Performance of Geoportals in Rural Areas According to Core Web Vitals. ISPRS Int. J. Geo-Inf. 2023, 12, 484. [Google Scholar] [CrossRef]
  68. Quiñones, D.; Barraza, A.; Rojas, L. User Experience Heuristics for Geoportals. In Usability and User Experience; Ahram, T., Falcão, C., Eds.; AHFE International: Orlando, FL, USA, 2022; Volume 39. [Google Scholar] [CrossRef]
  69. Butkovic, A.; McArdle, G.; Bertolotto, M. A Framework to Measure User Experience of Geoportals. In Proceedings of the IECHCI2023, Erzurum, Turkiye, 23–25 November 2023; Volume 104, pp. 79–84. [Google Scholar]
  70. Ferrer, M.Á.; Aguirre, E.R.; Méndez, R.E.; Mediavilla, D.G.; Almonacid, N.J. UX Research: Investigación en experiencia de usuario para diseño de mapa interactivo con variables georreferenciadas en EMR. Rev. Espac. 2020, 41, 27–45. Available online: https://www.revistaespacios.com/a20v41n01/20410127.html (accessed on 1 July 2024).
  71. Bonastre, L.; Granollers, T. A set of heuristics for user experience evaluation in e-Commerce websites. In Proceedings of the Seventh International Conference on Advances in Computer-Human Interactions (ACHI 2014), Barcelona, Spain, 23–27 March 2014; pp. 27–34. [Google Scholar]
  72. Horbiński, T.; Cybulski, P.; Medyńska-Gulij, B. Graphic Design and Button Placement for Mobile Map Applications. Cartogr. J. 2020, 57, 196–208. [Google Scholar] [CrossRef]
  73. Kellenberger, B.; Iosifescu Enescu, I.; Nicola, R.; Iosifescu Enescu, C.M.; Panchaud, N.H.; Walt, R.; Hotea, M.; Piguet, A.; Hurni, L. The Wheel of Design: Assessing and Refining the Usability of Geoportals. Int. J. Cartogr. 2016, 2, 95–112. [Google Scholar] [CrossRef]
  74. Lallemand, C.; Koenig, V. Lab Testing Beyond Usability: Challenges and Recommendations for Assessing User Experiences. J. Usability Stud. 2017, 12, 133–154. Available online: https://orbilu.uni.lu/handle/10993/31420 (accessed on 1 July 2024).
  75. Dickinger, A.; Stangl, B. Website Performance and Behavioral Consequences: A Formative Measurement Approach. J. Bus. Res. 2013, 66, 771–777. [Google Scholar] [CrossRef]
  76. Green, D.T.; Pearson, J.M. Integrating Website Usability with the Electronic Commerce Acceptance Model. Behav. Inf. Technol. 2011, 30, 181–199. [Google Scholar] [CrossRef]
Figure 1. Overview of geoportals used in Tomice Municipality from 2012 to 2024 (accessed on 1 July 2024). Source: original work.
Figure 2. Method for calculating the aggregate SUS score. Source: original work based on [62].
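
For reference, the standard per-respondent SUS computation from [62], on which Figure 2 is based, rescales the ten 1–5 item scores s_i to a 0–100 value; the study's aggregate score then combines the fourteen experts' individual SUS values:

$$\mathrm{SUS} = 2.5\left[\sum_{i \in \{1,3,5,7,9\}} (s_i - 1) + \sum_{i \in \{2,4,6,8,10\}} (5 - s_i)\right]$$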
Figure 3. (a) Performance results (value of the Performance index); (b) aggregate usability result according to GOQ. G1: https://www.tomice.pl/mpzp; G2: http://www.mpzp.tomice.pl; G3: https://sip.gison.pl/tomice (accessed on 1 July 2024). Source: original work.
Table 1. Technical specifications of the tested geoportals.
| Design Attributes | G1 | G2 | G3 |
| W3C specification | XHTML 1.0 Transitional | HTML 5 | HTML 5 |
| Software framework | JavaScript, Maphilight jQuery plugin | Geoxa Viewer, Geoxa Map Serwer | Leaflet, OpenStreetMap |
| Base map | raster | vector | vector |
| Design and implementation | University of Agriculture in Kraków | CGIS Geoxa | GISON |
Table 2. SUS evaluation, mobile.
| SUS Question | Type | G1 Σ | AM | MO | M | G2 Σ | AM | MO | M | G3 Σ | AM | MO | M |
| 1. I think I would like to use this system frequently | BTB | 43 | 3.1 | 3 | 3 | 44 | 3.1 | 4 | 3 | 57 | 4.1 | 5 | 4 |
| 2. I consider the system unnecessarily complicated | STB | 29 | 2.1 | 1 | 2 | 33 | 2.4 | 2 | 2 | 25 | 1.8 | 2 | 2 |
| 3. I think the system is easy to use | BTB | 61 | 4.4 | 4 | 4 | 57 | 4.1 | 4 | 4 | 60 | 4.3 | 4 | 4 |
| 4. I think I should need technical assistance to be able to use the system | STB | 17 | 1.2 | 1 | 1 | 21 | 1.5 | 1 | 1 | 20 | 1.4 | 1 | 1 |
| 5. I find various functions of the system to be well integrated | BTB | 49 | 3.5 | 4 | 4 | 45 | 3.2 | 4 | 3 | 55 | 3.9 | 4 | 4 |
| 6. I think the system has too many inconsistencies | STB | 28 | 2.0 | 2 | 2 | 32 | 2.3 | 2 | 2 | 29 | 2.1 | 2 | 2 |
| 7. I imagine most people would learn how to use the system very quickly | BTB | 66 | 4.7 | 5 | 5 | 56 | 4.0 | 4 | 4 | 58 | 4.1 | 4 | 4 |
| 8. I think the system is very inconvenient to use | STB | 26 | 1.9 | 1 | 1.5 | 27 | 1.9 | 2 | 2 | 21 | 1.5 | 1 | 1 |
| 9. I felt confident using the system | BTB | 57 | 4.1 | 4 | 4 | 48 | 3.4 | 4 | 3.5 | 57 | 4.1 | 5 | 4 |
| 10. I had to learn a lot before I could start using the system | STB | 25 | 1.8 | 1 | 2 | 31 | 2.2 | 2 | 2 | 26 | 1.9 | 2 | 2 |
BTB—bigger-the-better; STB—smaller-the-better; Σ—sum of the fourteen experts' item scores; AM—arithmetic mean; MO—mode; M—median. G1: https://www.tomice.pl/mpzp; G2: http://www.mpzp.tomice.pl; G3: https://sip.gison.pl/tomice (accessed on 1 July 2024).
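
For orientation, each AM in Tables 2 and 3 equals the item-score sum Σ divided by the number of experts (n = 14), a relationship consistent with every tabulated value; e.g., for item 1 on G3 in mobile mode:

$$\mathrm{AM} = \frac{\Sigma}{n} = \frac{57}{14} \approx 4.1$$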
Table 3. SUS evaluation, desktop.
| SUS Question | Type | G1 Σ | AM | MO | M | G2 Σ | AM | MO | M | G3 Σ | AM | MO | M |
| 1. I think I would like to use this system frequently | BTB | 42 | 3.0 | 3 | 3 | 48 | 3.4 | 4 | 3.5 | 58 | 4.1 | 4 | 4 |
| 2. I consider the system unnecessarily complicated | STB | 27 | 1.9 | 1 | 1.5 | 29 | 2.1 | 1 | 2 | 23 | 1.6 | 2 | 2 |
| 3. I think the system is easy to use | BTB | 63 | 4.5 | 5 | 4.5 | 59 | 4.2 | 4 | 4 | 62 | 4.4 | 5 | 4.5 |
| 4. I think I should need technical assistance to be able to use the system | STB | 16 | 1.1 | 1 | 1 | 20 | 1.4 | 1 | 1 | 18 | 1.3 | 1 | 1 |
| 5. I find various functions of the system to be well integrated | BTB | 48 | 3.4 | 4 | 3.5 | 52 | 3.7 | 5 | 4 | 59 | 4.2 | 4 | 4 |
| 6. I think the system has too many inconsistencies | STB | 28 | 2.0 | 2 | 2 | 26 | 1.9 | 2 | 2 | 24 | 1.7 | 2 | 2 |
| 7. I imagine most people would learn how to use the system very quickly | BTB | 61 | 4.4 | 5 | 4.5 | 56 | 4.0 | 5 | 4 | 58 | 4.1 | 4 | 4 |
| 8. I think the system is very inconvenient to use | STB | 34 | 2.4 | 1 | 2.5 | 26 | 1.9 | 1 | 2 | 22 | 1.6 | 1 | 1.5 |
| 9. I felt confident using the system | BTB | 58 | 4.1 | 4 | 4 | 57 | 4.1 | 4 | 4 | 62 | 4.4 | 5 | 4.5 |
| 10. I had to learn a lot before I could start using the system | STB | 23 | 1.6 | 1 | 1 | 27 | 1.9 | 1 | 2 | 26 | 1.9 | 1 | 1 |
BTB—bigger-the-better; STB—smaller-the-better; Σ—sum of the fourteen experts' item scores; AM—arithmetic mean; MO—mode; M—median. G1: https://www.tomice.pl/mpzp; G2: http://www.mpzp.tomice.pl; G3: https://sip.gison.pl/tomice (accessed on 1 July 2024).
Table 4. Ad-hoc performance results *.
| Geoportal | GTmetrix Performance (%) | GTmetrix Structure (%) | Pingdom Performance (%) | PageSpeed Insights Performance (%) | GiftOfSpeed Speed Score (%) |
| G1 | 98 | 90 | 91 | 100 | 98 |
| G2 | 69 | 74 | 83 | 93 | 74 |
| G3 | 49 | 52 | 71 | 55 | 52 |
Scale: 0–49 (very poor), 50–70 (poor), 71–89 (average), 90–100 (good), based on [67]. GTmetrix: https://gtmetrix.com/; Pingdom Website Speed Test: https://tools.pingdom.com/; PageSpeed Insights: https://pagespeed.web.dev/; GiftOfSpeed: https://www.giftofspeed.com (accessed on 5 August 2024). * One-off ('here and now') ad-hoc performance measurement; desktop mode. Report generated 5 August 2024.
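
The Table 4 scores were read from each tool's web interface. As an illustration only (not the study's procedure), a comparable ad-hoc desktop score can also be fetched programmatically; the sketch below uses Google's public PageSpeed Insights API v5, whose endpoint and response fields are as documented by Google, while the loop and labels are this sketch's own:

```python
# Illustration only: fetching a single ad-hoc desktop performance score
# from Google's public PageSpeed Insights API (v5). The study's Table 4
# values were read from the tools' web interfaces; this sketch merely
# shows how such a measurement could be automated.
import requests

API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
GEOPORTALS = {
    "G1": "https://www.tomice.pl/mpzp",
    "G2": "http://www.mpzp.tomice.pl",
    "G3": "https://sip.gison.pl/tomice",
}

for label, url in GEOPORTALS.items():
    resp = requests.get(API, params={"url": url, "strategy": "desktop"}, timeout=120)
    resp.raise_for_status()
    # Lighthouse reports the performance category score as a 0-1 fraction.
    score = resp.json()["lighthouseResult"]["categories"]["performance"]["score"]
    print(f"{label}: Performance = {round(score * 100)}%")
```

Note that such ad-hoc scores vary between runs, which is why the table labels them 'here and now' measurements.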
