
Universal Access in the Information Society

https://doi.org/10.1007/s10209-024-01164-5

LONG PAPER

An inclusive framework for automated web content accessibility evaluation

Jinat Ara1 · Cecilia Sik-Lanyi1 · Arpad Kelemen2 · Tibor Guzsvinecz3

Accepted: 14 October 2024


© The Author(s) 2024

Abstract
Since 1996, web accessibility evaluation has been an important aspect of web development to increase social inclusion for people with special needs. Several web accessibility evaluation and testing tools have been developed to automatically evaluate websites in order to identify barriers for people with disabilities. The developed tools are significant since their aim is to effectively represent accessibility issues. However, some website accessibility issues cannot be identified by the existing accessibility testing tools due to several limitations, including (i) inappropriate guideline selection, (ii) ambiguities in guideline understanding, (iii) avoiding user and expert suggestions as evaluation criteria, (iv) limited consideration of semantic perspectives, and (v) unwillingness to incorporate updated engineering methods. Therefore, reported results may be unclear and inappropriate for some users. Such limitations are critical factors that reduce the effectiveness of the developed tools and cause unwillingness to use a particular tool or the adoption of other tools. In this situation, we must identify which aspects are important to incorporate into development in order to make the developed solution more effective, as it allows users to make their websites accessible to people with disabilities. In this paper, we first present a literature review of different existing solutions for web accessibility testing to identify their challenges and limitations. Following the reported findings, we propose an automated web accessibility evaluation framework addressing several accessibility aspects to improve the evaluation results by mitigating the limitations of existing solutions. The proposed framework is validated by comparing it with existing automated solutions considering their functional properties. The proposed accessibility framework might be beneficial for web developers, accessibility engineers, and other practitioners to incorporate into their development and research.

Keywords Automated testing · Acceptability · Accessibility testing · Effectiveness · Fairness · Hybrid testing ·
Reliability · Textual and non-textual complexity · Web accessibility

Abbreviations
ACT Accessibility Conformance Testing
AI Artificial Intelligence
API Application Programming Interface
CSS Cascading Style Sheets
CVD Color Vision Deficiency
DOM Document Object Model
EDBA Evaluator Decision Based Assignment
HTML Hyper Text Markup Language
NCAM National Center for Accessible Media
NLP Natural Language Processing
UI User Interface
VCS Visual Complexity Score
W3C World Wide Web Consortium
WCAG Web Content Accessibility Guidelines

Jinat Ara
jinat.ara@mik.uni-pannon.hu
Cecilia Sik-Lanyi
lanyi.cecilia@mik.uni-pannon.hu
Arpad Kelemen
kelemen@umaryland.edu
Tibor Guzsvinecz
guzsvinecz.tibor@zek.uni-pannon.hu

1 Department of Electrical Engineering and Information Systems, University of Pannonia, Egyetem u. 10, Veszprem 8200, Hungary
2 Department of Organizational Systems and Adult Health, University of Maryland Baltimore, 655 W. Lombard St #455B, Baltimore, MD 21201, USA
3 Department of Information Technology and Its Applications, University of Pannonia, Zalaegerszeg, Hungary


1 Introduction

Nowadays, the web has become the most important platform for people with disabilities (e.g., blindness, cognitive and vision impairment, hearing difficulties, etc.) to get information. With the web, people have an opportunity to access a wide array of information (e.g., news, healthcare information, educational resources, banking information, etc.) and to carry out several activities (e.g., online transactions, shopping, doctor appointments, e-health services, etc.) that would be difficult without proper accessibility.

Generally, accessibility is the ease of use of any services, tools, and environments in terms of user capability [1]. In the context of the web, accessibility is the ability to ensure consistent web navigation, prototype identification, information extraction, and execution of website functionalities without experiencing any difficulties [2]. For example, the navigation path of a website should be equal for people with and without disabilities. Recently, several statistics have confirmed that a significant number of users with some sort of disability are actively participating online and that this number is steadily increasing [3]. About 81% of users with various disabilities still experience difficulty, and sometimes it is quite impossible for them to effectively perceive the content [4]. A few recent studies showed that the majority of websites fail to maintain even the basic accessibility requirements or minimum standards of accessibility [5, 6]. As a result, people with disabilities experience several difficulties with web access. For example, web content might be difficult to read and understand, the placement of user interface elements might be difficult to identify or remember, and some interactive designs (dropdown menus, sub-tasks, landing pages, etc.) might make the content partially or completely inaccessible. Besides, as accessibility problems are distinct according to the type of disability, the problems or difficulties vary from person to person and between situations. In particular, people with vision problems have difficulties understanding content written in a very small font or in a specific style (italic or bold). People with color blindness have difficulties recognizing specific colors, and people with cognitive difficulties have issues understanding the meaning of complex or advanced words, notations, abbreviations, and alerts. Also, people with motion difficulties have issues with scrolling and pointing at dropdown menus. Consequently, people with disabilities are forced to spend more time on a website to find the information they need than people without disabilities.

In this context, in our preliminary investigation, we validated a set of 15 webpages from different parts of the world against different criteria to understand their accessibility [7]. Our investigation confirmed that most of the tested websites had accessibility issues that should cause further concern.

Addressing these particular issues, in the last few decades legislation has been ratified; for instance, Section 508 of the Rehabilitation Act was initiated by the United States in 2006 concerning the human rights of people with disabilities [8]. In 2010, the European Union accepted these guidelines and declared that, to ensure the accessibility of online platforms including the web, imposing these guidelines is mandatory to improve social inclusion. Nowadays, the World Wide Web Consortium (W3C) has taken several steps to improve the accessibility of web platforms by initiating several protocols, recommendations, and guidelines about the accessibility of software, application tools, and web content to make them accessible to all. The W3C published a set of standards called the 'Web Content Accessibility Guidelines (WCAG)', which is considered the most effective guideline for web designers and developers to improve access to web content and web platforms [9, 10].

Currently, the most effective technique for improving such an inaccessible scenario is the detection of accessibility issues using web accessibility evaluation tools [11]. Such tools help to evaluate and identify the issues associated with accessibility in terms of accessibility guidelines and provide additional information about how to address the detected issues for future improvement. To assist web practitioners (e.g., web designers, web developers, etc.) and end users, several automated and semi-automated tools have been designed and implemented for website accessibility evaluation. For example, Schiavone and Paterno (2015) [12] proposed an accessibility evaluation tool implementing the updated version of the web content accessibility guidelines to address and improve the shortcomings of current evaluation tools. A few authors proposed automatic evaluation tools for dynamic webpage evaluation [13, 14]. Some tools have been developed for personalized web accessibility evaluation for specific disabilities, such as vision impairment [15, 16]. Details about several accessibility evaluation tools can be found in Sect. 2. Though these accessibility evaluation tools are effective in investigating websites, unfortunately, most of them have several issues that mislead the evaluation process and, in doing so, reduce the acceptability and reliability of the evaluated result. For example, it is difficult to determine which guidelines are implemented and which cannot be implemented. It is also unclear which guidelines necessitate additional testing, such as user or expert testing, for further validation. Besides, it is difficult to determine the website's accessibility percentage for each disability, and the assessment terminologies are not clearly defined, which makes it difficult to understand how the overall score was calculated.


According to [17], considering accessibility guidelines alone is not enough; additional accessibility requirements should be concurrently considered. Kaur and Vijay claimed that only 50% of the accessibility problems with web content are addressed by the web content accessibility guidelines [18]. As a result, additional specific issues cannot be resolved by simply adhering to the web content accessibility guidelines. Therefore, to improve website effectiveness, efficiency, and satisfaction, accessibility guidelines should be incorporated together with user and expert suggestions as additional evaluation criteria. Indeed, accessibility guidelines are written in a natural language format that is very general and cumbersome for designers, developers, and web practitioners to apply, which puts pressure on and delays the design and development process. Some recent research accepted this limitation and addressed some of its additional consequences. For example, as WCAG remains subjective, it can be interpreted and implemented in several ways depending on designers' and developers' individual preferences. As website designers and developers are not accessibility experts and have a limited understanding of accessibility, some guidelines and requirements may be applied differently depending on the scenario and the context. Besides, a certain level of knowledge is required to understand the natural language formatted guidelines and user perspectives. Due to these assorted issues, web development and evaluation processes could be deceptive.

Some studies [19, 20] claimed that most accessibility evaluation tools have limited consideration of semantic aspects and that their development does not follow advanced engineering techniques. Recently, researchers conducted extensive investigations into improving the effectiveness of the website evaluation process, specifically the automated web evaluation process, to assist web practitioners in understanding the accessibility status of their developed web-based applications [21]. Therefore, in this paper, we investigate several proposed solutions for accessibility evaluation to identify frequently appearing issues that limit the effectiveness of their developments. Based on our observations and reported findings, we propose a framework considering several aspects that are crucial to include in the development process to limit the existing issues of current automated web accessibility evaluation tools. The proposed accessibility framework demonstrates five aspects, considering several criteria to reduce ambiguities: accessibility guidelines, user and expert suggestions, guideline simplification, automated testing, and issue identification and visualization. As the proposed approach focuses on a wide array of aspects including guidelines, additional criteria, evaluation result computation, visualization, etc., it could be useful in facilitating the evaluation process and representing the computed results as reliable, acceptable, and fair. Also, the aspects addressed in the proposed system are not considered in existing systems, which makes the proposed system distinctive. Besides, the proposed framework will be helpful for web practitioners and researchers in understanding the web evaluation process.

Moreover, the first objective of this work is to examine the existing available web accessibility evaluation techniques or solutions to identify their limitations and answer our RQ. Based on the findings of the addressed research question, the second objective is to propose an accessibility testing framework that focuses on several aspects for enhancing the automatic accessibility evaluation process against the observed drawbacks. The contributions of our research are:

● Evaluating the effectiveness of the recently developed accessibility testing and evaluation systems/tools;
● Determining the challenges and limitations of the current accessibility testing processes/methods;
● Presenting an extensive accessibility testing framework considering a wide array of aspects to mitigate the investigated issues and improve the effectiveness of the accessibility testing results.

This paper is structured as follows: Sect. 2 provides the literature review of past studies. Section 3 provides the research methodology and discusses multiple accessibility evaluation techniques or solutions, illustrating how they evaluate accessibility issues, their strengths, challenges, and drawbacks. Also, Sect. 3 explains our proposed framework with its main benefits and how it can address the existing research challenges and drawbacks. Section 4 provides the validation results of the proposed framework. Following that, Sect. 5 presents the discussion in detail. Finally, Sect. 6 outlines the conclusions of this study.

2 Literature review

This study aims to evaluate the existing literature related to the web content accessibility evaluation process in order to identify its challenges and limitations by conducting a literature review. This phase is divided into three steps: (i) planning the literature review, (ii) conducting the literature review, and (iii) reporting the findings.

2.1 Planning the literature review

The main sub-activities related to planning the literature review are (i) specification of the research question, (ii) formulation of the search string, and (iii) database selection. All these sub-activities are described below.


2.1.1 Research questions

The formulation of the research question is the initial stage of literature selection. Accordingly, we developed the following research question based on our focused research area:

RQ1 What are the challenges and drawbacks of the existing web accessibility testing process?

2.1.2 Search strings

In order to answer the research question, it is crucial to identify and evaluate the available web accessibility evaluation frameworks that contribute to improving the accessibility of webpages. To achieve this aim, preliminary research activities such as extracting literature from scientific databases are the first important tasks [22]. Therefore, we defined a list of keywords based on our research question regarding web accessibility frameworks or the web accessibility improvement process in order to choose the relevant search strings. We manually searched several scientific databases using the developed list of keywords, and we then fine-tuned it based on how well the results aligned with the goal of the study.

The following is the final list of keywords, combined with Boolean operators: ("web accessibility evaluation framework" OR "techniques" OR "methods" OR "tools") AND ("Automated web accessibility evaluation criteria" OR "statistics") AND ("Web accessibility validation by user" OR "expert") AND ("Web accessibility evaluation for people with disabilities" OR "impairments" AND "Accessibility issues with web") AND ("Accessibility validation").

2.1.3 Database selection

Database selection is essential for identifying the most recent and pertinent publications. There are many scientific databases available, so choosing the right ones is essential. Here, we have chosen six well-known databases that offer high-quality scientific publications and cover multidisciplinary research, including information science, which matches our research objectives. These databases extract the most relevant literature based on the user's interests using sophisticated search algorithms. The six databases used in this literature selection process are the IEEE Xplore, Google Scholar, ACM Digital Library, Springer, Scopus, and Web of Science directories.

2.2 Conducting the literature review

The aim of this phase is to describe the review activities by explaining (i) literature extraction, (ii) application of inclusion and exclusion criteria, and (iii) data extraction and quality assessment. These sub-activities are described in detail in the following subsections.

2.2.1 Literature extraction

To extract past literature, we examined the search terms in six different databases. Scholarly committees approve these databases for scientific publication, and almost all of the literature is freely accessible. These databases extract relevant literature based on search strings using their sophisticated search algorithms and semantic technologies. In total, 140 records were found from the six databases for the period from 2016 to November 2023 (IEEE Xplore: 15, Google Scholar: 53, ACM Digital Library: 6, Springer: 7, Scopus: 25, and Web of Science: 34). The search result is shown in Fig. 1, which reports the number of papers retrieved from each database by the search query. Compared to the other databases, Google Scholar, Web of Science, and Scopus returned a greater number of publications. From the 140 papers, we chose the most pertinent papers needed for this evaluation using inclusion and exclusion criteria (explained in the following section).

2.2.2 Inclusion and exclusion criteria

The most appropriate studies for this research were selected from the obtained literature after evaluation. Literature that did not fit the review's inclusion requirements was eliminated. The following criteria were used for inclusion: publications that are written in English, published between 2016 and November 2023 in peer-reviewed conferences or journals (not books), and that discuss the advancement or growth of web accessibility evaluation frameworks.

The purpose of the exclusion process was to remove unsuitable publications from this review. The exclusion criteria were the following: duplicate papers, papers written in languages other than English, papers that are not directly related or are irrelevant, papers that are not publicly available, and non-research papers such as posters, letters, theses, and editorials. After applying the inclusion and exclusion criteria to the 140 papers, the following observations were made: forty-four (44) papers were duplicates or were not freely accessible or downloadable, ten (10) articles were literature reviews or non-English papers, and forty-seven (47) articles were not appropriate for our research objective; for example, they did not focus on a web accessibility framework or the web accessibility improvement process.


Only a few articles considered the guidelines and their effectiveness, a few focused on improvement suggestions, and six (6) articles were non-technical. In total, we excluded 107 papers in the preliminary screening. Finally, 33 articles met all inclusion criteria and were eligible for this study. The entire literature selection process was performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) technique [23]. Figure 2 shows the flowchart of the article selection process based on PRISMA.

Fig. 1 Number of selected studies per database

Fig. 2 PRISMA flow diagram for literature selection


After finalizing the related literature, we analyzed the selected articles considering our stated research question and reported the findings in Sect. 3 to answer our research question.

2.2.3 Data extraction and quality analysis

We performed this analysis based on the 33 eligible papers. Quality assessment and data extraction are crucial for identifying the papers most relevant to the research aim. This method was used in a number of previous literature reviews for the main assessment of the chosen research. Consequently, in order to find high-quality related studies, complete the paper reading process, and answer our research question, we adhered to several assessment criteria. The assessment criteria used for the evaluation of the studies are described in Table 1.

Table 1 Quality assessment questionnaire of selected studies
Questionnaire for analysis | Options | Outputs
QA.1: Is the paper a journal article indexed in SJR or JCR, or a conference paper? | (+1) Yes / (+0) No | If yes, rank them
QA.2: Is a web accessibility evaluation method or process proposed in the paper? | (+1) Yes / (+0) No | If yes, what method is presented?
QA.3: Does the paper have significant findings for improvement of the accessibility evaluation process? | (+1) Yes / (+0) No | If yes, what are the findings?

We set the score to either 0 or 1 for each question: a paper receives a score of 1 for each positive response, and zero if it is not pertinent to the evaluation question. The extra points for a Q1-indexed journal are +0.50; likewise, the additional points are +0.40, +0.30, and +0.20 for Q2-, Q3-, and Q4-indexed journals, respectively. Equations 1 and 2 were used to determine the final score and the normalized score, which was used to estimate the standard of each selected study.

Score = QA1 + QA2 + QA3 + additional points   (1)

Normalization = (Score - min(Score)) / (max(Score) - min(Score))   (2)

Following the quality analysis, we only took into account studies that obtained a normalized rating of α >= 0.4 based on the three assessment questions. Based on the results of the quality assessment criteria, ten (10) of the thirty-three (33) selected studies were eliminated from this review (as indicated in the PRISMA diagram). The quality assessment results of the 23 qualifying papers are shown in Table 2.
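As a small worked illustration of Eqs. 1 and 2, the Python sketch below computes the raw score and its min-max normalization for two of the assessed papers, using the quartile bonuses described above and the bounds reported in Table 2 (min_score = 1.0, max_score = 3.50). It is only a sketch of the scoring rule, not the authors' actual tooling.

QUARTILE_BONUS = {"Q1": 0.50, "Q2": 0.40, "Q3": 0.30, "Q4": 0.20, None: 0.0}

def score(qa1, qa2, qa3, quartile=None):
    # Eq. 1: Score = QA1 + QA2 + QA3 + additional points
    return qa1 + qa2 + qa3 + QUARTILE_BONUS[quartile]

def normalize(s, lo=1.0, hi=3.5):
    # Eq. 2: min-max normalization, using the bounds reported with Table 2
    return (s - lo) / (hi - lo)

# Example assessments taken from Table 2: (QA.1, QA.2, QA.3, journal quartile or None)
papers = {"Giraud et al. [2]": (1, 1, 1, "Q1"), "Robal et al. [17]": (1, 1, 0, None)}
for name, answers in papers.items():
    raw = score(*answers)
    # A study is retained only if its normalized rating is at least 0.4
    print(name, raw, round(normalize(raw), 2))

Running this reproduces the values listed in Table 2 (3.50 and 1.0 for Giraud et al.; 2.0 and 0.4 for Robal et al.).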
3 Methodology

3.1 Reporting the findings

In this section, our prime aim is to describe our findings from the selected literature in the context of our stated research question.

RQ1: What are the challenges and drawbacks of the existing web accessibility testing process?

To answer the addressed research question, we analyzed the 23 selected studies. From these studies, it can be concluded that accessibility testing is the process of validating the accessibility status of online content (e.g., websites/webpages) considering the requirements of people with disabilities (e.g., vision impairment, cognitive impairment, motion difficulty, etc.) [33]. After evaluating the 23 selected studies, we provide information related to their research area, evaluation type, and category in Table 3. This table indicates that all of the selected studies are from the web accessibility domain. In terms of evaluation type, the reviewed studies are grouped into two groups. The first group deals with how to test web accessibility features automatically without human involvement, which is also called Automated Testing. The second group deals with how to combine automated testing and human evaluation (performed by hiring people, including experts and users), also known as Hybrid Evaluation. Furthermore, in terms of category, the studies focusing on automated testing fall into three categories (declarative model, ontological model, and algorithmic evaluation), and the studies focusing on hybrid evaluation fall into two categories (crowdsourcing systems and heuristic approaches).

3.1.1 Automated testing

In the context of web accessibility, automated testing refers to the validation of accessibility features of website content through computer programs against accessibility guidelines. In other words, automatic testing is a process of automatically executing a set of tasks to validate a set of patterns of websites. The importance of automated accessibility testing has increased in recent times as it reduces testing time, minimizes the associated cost, and makes the testing process faster than other testing processes [24]. Besides, it allows testing a wide array of websites without difficulties. Given these advantages, several automated accessibility testing tools and methods have been developed for website evaluation; these are classified into three categories: declarative model, ontological model, and algorithmic evaluation (as shown in Table 3).
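To make the idea of programmatic guideline checking concrete, the hedged Python sketch below runs two simple WCAG-style checks over an HTML snippet: images without a text alternative (related to success criterion 1.1.1) and the contrast ratio between two colors (related to success criterion 1.4.3). It uses the third-party beautifulsoup4 package and only illustrates the kind of rule the reviewed tools automate; it is not taken from any of the cited systems.

from bs4 import BeautifulSoup

def images_missing_alt(html):
    # Flag <img> elements with no alt attribute or an empty one (WCAG 1.1.1-style check).
    soup = BeautifulSoup(html, "html.parser")
    return [str(img) for img in soup.find_all("img") if not img.get("alt")]

def relative_luminance(rgb):
    # WCAG 2.x relative luminance of an sRGB color given as (r, g, b) in 0-255.
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    # WCAG contrast ratio; 4.5:1 is the minimum for normal text (success criterion 1.4.3).
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

html = '<img src="logo.png"><p style="color:#777">Example text</p>'
print(images_missing_alt(html))                                   # the image lacks alt text
print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 2))  # grey on white, below 4.5

Real evaluators layer hundreds of such checks and map each result back to the corresponding success criterion; the point here is only the basic mechanism of rule-based checking.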


Table 2 Quality assessment result of the selected studies (normalization: min_score = 1.0; max_score = 3.50)
No | References | Type of paper | QA.1 | QA.2 | QA.3 | Additional points | Score | Normalized quality
01 | Hilera and Timbi-Sisalima [1] | Conference | 1 | 1 | 1 | 0.0 | 3.0 | 0.8
02 | Giraud et al. [2] | Journal | 1 | 1 | 1 | 0.50 | 3.50 | 1.0
03 | Mazalu and Cechich [10] | Journal | 1 | 1 | 1 | 0.20 | 3.20 | 0.8
04 | Acosta-Vargas et al. [3] | Journal | 1 | 1 | 1 | 0.50 | 3.50 | 1.0
05 | Li et al. [4] | Conference | 1 | 1 | 1 | 0.0 | 3.0 | 0.8
06 | Rashida et al. [6] | Journal | 1 | 1 | 1 | 0.30 | 3.30 | 0.9
07 | Alsaeedi [11] | Journal | 1 | 1 | 0 | 0.40 | 2.40 | 0.5
08 | Martins et al. [13] | Journal | 1 | 1 | 1 | 0.50 | 3.50 | 1.0
09 | Bonacin et al. [15] | Journal | 1 | 1 | 1 | 0.40 | 3.40 | 0.9
10 | Michailidou et al. [16] | Journal | 1 | 1 | 1 | 0.50 | 3.50 | 1.0
11 | Robal et al. [17] | Conference | 1 | 1 | 0 | 0.0 | 2.0 | 0.4
12 | Duarte et al. [19] | Conference | 1 | 1 | 0 | 0.0 | 2.0 | 0.4
13 | Giovanna et al. [21] | Journal | 1 | 1 | 1 | 0.40 | 3.40 | 0.9
14 | Jens Pelzetter [22] | Conference | 1 | 1 | 1 | 0.0 | 3.0 | 0.8
15 | Mohamad et al. [24] | Conference | 1 | 1 | 0 | 0.0 | 2.0 | 0.4
16 | Boyalakuntla et al. [25] | Conference | 1 | 1 | 1 | 0.0 | 3.0 | 0.8
17 | Shrestha [26] | Conference | 1 | 0 | 1 | 0.0 | 2.0 | 0.4
18 | Ingavélez-Guerra et al. [27] | Conference | 1 | 1 | 1 | 0.0 | 3.0 | 0.8
19 | Song et al. [28] | Journal | 1 | 1 | 1 | 0.40 | 3.40 | 0.9
20 | Li et al. [29] | Conference | 1 | 1 | 0 | 0.0 | 2.0 | 0.4
21 | Alahmadi [30] | Conference | 1 | 1 | 0 | 0.0 | 2.0 | 0.4
22 | Song et al. [31] | Conference | 1 | 1 | 0 | 0.0 | 2.0 | 0.4
23 | Hambley [32] | Conference | 1 | 1 | 0 | 0.0 | 2.0 | 0.4

Table 3 Summary of the evaluated studies
References | Research area | Type | Category
Hilera and Timbi-Sisalima [1] | Web accessibility | Automated testing | Ontological model
Rashida et al. [6] | Web quality | Automated testing | Algorithmic evaluation
Alsaeedi [11] | Web accessibility | Automated testing | Algorithmic evaluation
Bonacin et al. [15] | Web accessibility | Automated testing | Ontological model
Michailidou et al. [16] | Visual accessibility | Automated testing | Declarative model
Robal et al. [17] | Web usability | Automated testing | Ontological model
Duarte et al. [19] | Web accessibility | Automated testing | Algorithmic evaluation
Jens Pelzetter [22] | Web accessibility | Automated testing | Declarative model
Boyalakuntla et al. [25] | Web accessibility | Automated testing | Declarative model
Shrestha [26] | Image accessibility | Automated testing | Declarative model
Ingavélez-Guerra et al. [27] | E-learning platform accessibility | Automated testing | Ontological model
Giraud et al. [2] | Web accessibility and usability | Hybrid evaluation | Heuristic approach
Mazalu and Cechich [10] | Web accessibility | Hybrid evaluation | Heuristic approach
Acosta-Vargas et al. [3] | Web accessibility | Hybrid evaluation | Crowdsourcing system
Li et al. [4] | Web accessibility | Hybrid evaluation | Heuristic approach
Martins et al. [13] | Web accessibility and usability | Hybrid evaluation | Crowdsourcing system
Giovanna et al. [21] | Web accessibility | Hybrid evaluation | Crowdsourcing system
Mohamad et al. [24] | Web accessibility | Hybrid evaluation | Crowdsourcing system
Song et al. [28] | Web accessibility | Hybrid evaluation | Crowdsourcing system
Li et al. [29] | Web accessibility | Hybrid evaluation | Crowdsourcing system
Alahmadi [30] | Web accessibility | Hybrid evaluation | Crowdsourcing system
Song et al. [31] | Web accessibility | Hybrid evaluation | Crowdsourcing system
Hambley [32] | Web accessibility | Hybrid evaluation | Crowdsourcing system

3.1.1.1 Declarative model (DM) This section describes the frameworks that are considered declarative models for web content accessibility testing. Among the 23 papers, four (4) presented a declarative model (representing 17% of the total literature) that emphasized an automated accessibility testing process. These studies were grouped into seven major drawbacks, as presented in Table 4.

Table 4 The four studies related to the declarative model (DM), grouped by seven major challenges
References | Identified drawbacks
Boyalakuntla et al. [25] | DM1. Evaluates against a limited number of guidelines
Shrestha [26] | DM2. Absence of accessibility score computation
Jens Pelzetter [22] | DM3. Ontology modelling introduces ambiguity; DM4. Implements guidelines that are not a standard guideline; DM5. Lack of consideration of user and expert evaluation
Michailidou et al. [16] | DM6. Limited number of assessment features
Shrestha [26] | DM7. Limited checkpoints considered

Despite the several advantages of automated accessibility testing, a few researchers claimed that existing automated testing processes have several limitations. For example, most of the existing automated accessibility testing tools do not support browser plugins and need the installation of multiple packages, which makes the evaluation process challenging and may discourage users from utilizing them. Boyalakuntla et al. [25] developed an automated accessibility evaluation tool to assist web accessibility testing through command-line and browser-plugin facilities to address these challenges. The tool supports WCAG 2.1 and WCAG 2.2, focusing on ARIA, color contrast, Hyper Text Markup Language (HTML) checking, and interaction-related issues. It displays a list of errors as well as suggestions for how to repair them with a snippet of code. Although the proposed approach is effective, a few issues limit its effectiveness: the tool assesses websites against only 16 success criteria of WCAG 2.1 and 2.2, even though additional success criteria must be implemented or verified to represent the full picture of the accessibility situation, and it does not compute an overall accessibility score, so it might not accurately reflect the accessibility situation of the tested website.

Pelzetter [22] addressed how the vagueness of the accessibility requirements causes a number of anomalies in the results of automated accessibility testing. They proposed a declarative model to evaluate the accessibility status of websites by incorporating small test sets considering the Accessibility Conformance Testing (ACT) rule set and ontology modelling. Even though the proposed system is capable, certain potential issues have been observed that could restrict the effectiveness of the evaluation result. For instance, ontology modelling introduces ambiguity during the testing process, which reduces the effectiveness of the evaluated result. Also, implementing ACT rules is quite difficult as it requires resources and experience, which may not be convenient for practitioners.

Others asserted that understanding visual complexity is an emerging requirement for web accessibility evaluation, though many of the automated tools do not consider it due to its associated weaknesses and difficulties. For example, as picture descriptions are either written manually or generated automatically, they may not be appropriate or understandable for people with disabilities. In particular, it might be difficult for those with vision impairment to interpret the content of an image owing to an improper image description. Considering this, Michailidou et al. [16] proposed an automated tool to assess the visual complexity of the content and generate a Visual Complexity Score (VCS) based on common aspects of an HTML Document Object Model (DOM) to predict and visualize the complexity of the web page in the form of a pixelated heat map. Addressing the same issue, Shrestha [26] proposed a neural network framework for the automatic evaluation of image descriptions according to the National Center for Accessible Media (NCAM) principles. In order to increase the effectiveness of the proposed framework, they also incorporated expert knowledge (people who understand image accessibility) and the universal design process. Even though these two proposed systems can accurately predict accessibility issues, especially for those who have visual impairments, a few issues reduce the effectiveness of the suggested solutions: when evaluating content for visual complexity, the proposed systems consider a small number of assessment features, incorporate a small number of guidelines and checkpoints, and do not compute an overall accessibility score to indicate the accessibility status of the evaluated component.
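The VCS model itself is not specified here, but the following hedged Python sketch illustrates the general idea of deriving a complexity estimate from simple counts over an HTML DOM (elements, images, links, and distinct colors). The feature set and weights are invented for demonstration and are not Michailidou et al.'s actual VCS coefficients; the sketch again relies on the beautifulsoup4 package.

import re
from bs4 import BeautifulSoup

def dom_features(html):
    # Count a few structural properties of the page that plausibly relate to visual complexity.
    soup = BeautifulSoup(html, "html.parser")
    colors = set(re.findall(r"#[0-9a-fA-F]{3,6}", html))
    return {
        "elements": len(soup.find_all(True)),
        "images": len(soup.find_all("img")),
        "links": len(soup.find_all("a")),
        "colors": len(colors),
    }

def complexity_score(features, weights=None):
    # Weighted sum of DOM features; the weights are illustrative placeholders only.
    weights = weights or {"elements": 0.2, "images": 1.0, "links": 0.5, "colors": 0.8}
    return sum(weights[k] * v for k, v in features.items())

html = '<div style="color:#333"><a href="/">Home</a><img src="a.png" alt="logo"></div>'
print(complexity_score(dom_features(html)))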


3.1.1.2 Ontological model (OM) The frameworks for web content accessibility testing that are regarded as ontological models are explained in this section. Out of the 23 publications, four (4) proposed ontological models emphasizing automated accessibility testing procedures (representing 17% of the total literature). These studies were grouped into four major drawbacks, as presented in Table 5.

Table 5 The four studies related to the ontological model (OM), grouped by four major challenges
References | Identified drawbacks
Bonacin et al. [15], Robal et al. [17], Ingavélez-Guerra et al. [27] | OM1. Expert opinion/assessment is required
Robal et al. [17] | OM2. Updating the ontology is challenging and requires a level of expertise; OM3. Adding new guidelines is a complex and laborious task
Hilera and Timbi-Sisalima [1] | OM4. Observed inconsistency in the knowledge base and database

According to several studies, color-intensive website design may alter how accessible a website is for people who have color vision issues, such as difficulties with color perception or differentiation. In order to increase the accessibility and interaction of persons with Color Vision Deficiency (CVD) on the web, Bonacin et al. [15] designed an ontology-based framework for adaptive interface development. Using this method, it is possible to identify the ideal recoloring interface for CVD users based on their personal preferences. Additionally, Robal et al. [17] addressed that the user interface of websites should be developed with end-user requirements in mind so that users can navigate the structure easily and smoothly and understand the information being shown to them. To ensure these aspects, they developed an ontology-based automated evaluation of the website user interface (UI) to determine the accessibility and usability of the UI.

Furthermore, to assess website accessibility, Hilera and Timbi-Sisalima [1] designed a universal architecture that focuses on web services and semantic web technologies. They incorporated multiple evaluation tools and generated results according to the semantic similarity of the multiple reports obtained by each tool. Similarly, Ingavélez-Guerra et al. [27] provided a strategy based on ontology and knowledge modelling that supports accessibility analysis and evaluation of learning objects, highlighting the relevance of knowledge representation about learning objects with a focus on WCAG.

These developments are effective, although a few difficulties have made these improvements less successful: frequently updating the ontology and adding new guidelines are arduous processes in ontology-based solutions that practitioners may not want to employ, and without a professional review of the validated outcome, evaluation results may be deceptive and the user may not find them fully acceptable.

3.1.1.3 Algorithmic evaluation (AE) The frameworks for algorithmic evaluation of web content accessibility testing are described in this section. Out of the 23 studies, three (3) presented algorithmic evaluation with a focus on the automated accessibility testing procedure (representing 13% of the total literature). These studies were grouped into four major drawbacks, as presented in Table 6.

Table 6 The three studies related to algorithmic evaluation (AE), grouped by four major challenges
References | Identified drawbacks
Rashida et al. [6] | AE1. Limited number of assessment features
Duarte et al. [19] | AE2. No expert evaluation
Abdullah Alsaeedi [11] | AE3. A limited number of tools have been incorporated; AE4. Missing possible web accessibility violations and the corresponding error message generation function

Recently, a few researchers and academics have expressed concerns about website quality, which needs extra attention to satisfy end users' demands. They claimed that website quality also gives an indication of how accessible a website is. With this in mind, Rashida et al. [6] presented an automatic evaluation process by putting forward three algorithms for the evaluation of content information, loading time, and overall performance attributes that are usually disregarded in many approaches. Results from experiments demonstrate the usefulness of the suggested automated tool. Tackling the same problem, Alsaeedi [11] presented an algorithmic evaluation framework that incorporates different automated accessibility testing tools to evaluate the accessibility of websites. The framework enables the selection of sets of evaluation tools prior to the test and allows comparison of the accessibility status of the old and new versions of a given website.

Furthermore, concerning the semanticity of web content, Duarte et al. [19] reported that current automated web accessibility testing tools are unable to assess rules and techniques semantically and therefore evaluate web content incorrectly. To mitigate this problem, they proposed an automated tool that determines the similarity between content and its textual description from the perspective of the web content accessibility guidelines. The semantic similarity measurement is performed through the SCREW algorithm, which measures similarity between a set of textual descriptions of web content. They represent the accessibility of web content in terms of the computed similarity score.

Algorithmic evaluation is effective in automated accessibility evaluation, although the methodologies mentioned above have a number of limitations that restrict their usefulness and applicability. For example, the assessment features are very few, and other features must be taken into account in the evaluation process. To validate the algorithmic solution or evaluation result, a wide array of user and expert interventions is required, which would help to validate the evaluated results and improve their acceptability to the end user.

3.1.2 Hybrid evaluation

The W3C has been offering a list of web accessibility testing tools in recent years. Unfortunately, these tools are not frequently updated to use the most recent version of the accessibility guidelines and are not able to keep up with the latest technology [34]. For example, mobile browsing is growing in popularity every day, but in some cases, due to the technological design and development process, websites are not usable or viewable on different devices and screen resolutions. Generally, accessibility evaluation of websites is a challenging task that demands the incorporation of several aspects (web features, requirements of people with disabilities, expert opinion, etc.) to reduce the shortcomings of automated accessibility testing results. Therefore, these kinds of problems might be missed by the automated accessibility evaluation process.

To avoid such scenarios, researchers suggested that hybrid evaluation could be effective in mitigating such issues to improve the accuracy of the evaluated result and retain the fairness of the evaluation process, as it is not only a code-oriented evaluation but also allows human investigation. In other words, the hybrid evaluation process is a way of evaluating websites in terms of automated, user, and expert evaluation. Moreover, it incorporates the user and expert requirements and suggestions to improve the effectiveness of the evaluation process. Several researchers have conducted hybrid evaluations of web accessibility, which are divided into two categories: crowdsourcing systems and heuristic approaches (as shown in Fig. 2).

3.1.2.1 Crowdsourcing system (CS) This section describes the frameworks that are regarded as crowdsourcing methods for testing the accessibility of web content. Nine (9) of the 23 studies (39% of the total literature) presented crowdsourcing systems with an emphasis on accessibility testing procedures. As shown in Table 7, these studies can be categorized into nine main shortcomings.

Table 7 The nine studies related to crowdsourcing systems (CS), grouped by nine major challenges
References | Identified drawbacks
Song et al. [28] | CS1. A limited number of checkpoints
Song et al. [28], Mohamad et al. [24], Li et al. [29], Tahani Alahmadi [30], Giovanna et al. [15], Acosta-Vargas et al. [3] | CS2. Not time efficient/time-consuming
Mohamad et al. [24] | CS3. Absence of empirical validation
Li et al. [29], Acosta-Vargas et al. [3] | CS4. Cost-sensitive
Tahani Alahmadi [30], Hambley [32] | CS5. Difficult to perceive
Giovanna et al. [21], Martins et al. [13] | CS6. Limited number of evaluation tools incorporated
Song et al. [31] | CS7. Evaluation results could be inappropriate due to the lack of accessibility knowledge; CS8. A limited number of evaluation metrics might bias the performance
Hambley [32] | CS9. Expert and user opinion/assessment is required

Numerous studies have been conducted focusing on several issues to improve the effectiveness of crowdsourcing systems. For example, evaluating some specific issues (e.g., access, navigation, etc.) for people with disabilities requires manual assessment (e.g., user and expert assessment), and non-expert evaluation could result in inaccurate validation of these concerns. Song et al. [28] developed a crowdsourcing-based web accessibility evaluation system merging user and expert evaluators with an inferencing technique to produce trustworthy and accurate web accessibility evaluation results and to improve user and expert assessment processes. Web designers may find accessibility issues and solutions with the help of the provided accessibility reports. Besides, Mohamad et al. [24] proposed a hybrid approach to assess websites' accessibility incorporating accessibility knowledge, automated tool assessment, and expert feedback to validate multiple webpages/websites. They employed a set of rules through a rule engine to perform the inferencing process and a decision support system to compute the accessibility evaluation report, which makes the process effective.
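To illustrate the basic mechanics of such hybrid aggregation, the Python sketch below combines an automated verdict with user and expert judgments for each checkpoint through a simple weighted vote. The weights and decision rule are hypothetical assumptions for illustration; they are not the inferencing techniques of Song et al. [28] or Mohamad et al. [24].

# Hypothetical weighted vote over automated, user, and expert verdicts per checkpoint.
WEIGHTS = {"automated": 1.0, "user": 1.5, "expert": 2.0}   # assumed weights, illustration only

def aggregate(verdicts):
    # verdicts maps an evaluator type to True (checkpoint passed) or False (failed).
    total = sum(WEIGHTS[src] * (1 if passed else -1) for src, passed in verdicts.items())
    return "pass" if total > 0 else "fail"

checkpoints = {
    "1.1.1 Non-text content": {"automated": True, "user": False, "expert": False},
    "1.4.3 Contrast (minimum)": {"automated": True, "user": True, "expert": True},
}
report = {name: aggregate(v) for name, v in checkpoints.items()}
print(report)   # the first checkpoint fails despite the automated pass, reflecting the human input

The design point hybrid systems exploit is visible here: human judgments can override an automated pass on criteria that machines verify only superficially.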


Furthermore, Li et al. [29] addressed that automated testing is effective, though a few checkpoints/success criteria of the web content accessibility guidelines demand manual judgment. However, in manual judgment/testing, the main challenges are associated with the burdensome and excessive workload of the evaluator. Addressing these challenges, the authors proposed an advanced crowdsourcing-based web accessibility evaluation process that is effective for facilitating the manual testing process (e.g., user and expert testing). The proposed technique configured and simplified the evaluation system, focusing on the learning system, the task assignment system, and the task review process.

Furthermore, addressing accessibility and usability issues, Alahmadi [30] proposed a crowdsourcing system for web accessibility evaluation considering subjective and objective measurements. They incorporated several accessibility and usability criteria, automated systems, and human involvement to reduce the amount of effort and time required for interactivity on the webpage during the evaluation process. In order to provide flexible and open support for a variety of accessibility difficulties, Broccia et al. [21] presented a crowdsourcing system to evaluate website accessibility, including the results of automated accessibility testing and usability testing. They incorporated usability testing to validate the result in a qualitative and quantitative manner. A heuristic method based on a user barrier computation process was also proposed by Acosta-Vargas et al. [3] to increase user satisfaction, productivity, security, and effectiveness. The proposed approach was developed by incorporating UX Checker, evaluators (who are experts in web accessibility), and users with low vision. Another study by Martins et al. [13] noted that there are several accessibility and usability problems associated with existing guidelines that are challenging to pinpoint using automated methods alone. Thus, they suggested a hybrid process for evaluating web accessibility that combines automatic evaluation (using the ACCESSWEB platform) and manual tasks (e.g., user and expert evaluation).

In order to improve the task assignment process, a crowdsourcing system can be divided into tedious micro-tasks that are solved in parallel by workers. Most of the crowdsourcing systems validate websites using an expert who has a level of knowledge in accessibility. It has been argued that the validity and reliability of the evaluation result could be significantly improved by incorporating non-experts, as they observe the websites from an end-user viewpoint. Therefore, Song et al. [31] introduced a new crowdsourcing system implementing the Golden Set Strategy and the Time-Based Golden Set Strategy. They incorporated automated testing for a few checkpoints, and the other checkpoints were evaluated manually. The manual evaluation was performed through non-experts' observation, where all the associated tasks were distributed through the Golden Set Strategy and task completion time was allocated according to the Time-Based Golden Set Strategy. The evaluation results show that the evaluation time is reduced to half of the expert evaluation time while the evaluation accuracy improves.

In addition, Hambley [32] addressed that automated tools have limited coverage of accessibility issues, which reduces the acceptance of the accessibility evaluation result. Therefore, they proposed an accessibility testing system incorporating pre-existing evaluation tools in terms of sampling, clustering, and developer testing. The proposed approach is effective in reducing the inaccuracy of the evaluation results.

Although these approaches are effective, some issues reduce their effectiveness. For example, since only a small subset of WCAG checkpoints can be evaluated using these methodologies, the results may not accurately reflect the accessibility perspective; they are challenging to implement because they require a lot of effort, time, financial support, and empirical validation; a wide array of evaluation tools are available, though only a few can be incorporated into these systems, which might alter the evaluation result depending on the other evaluation tools taken into account; considering a limited number of evaluation metrics may bias the evaluated result; a lack of accessibility knowledge may skew the evaluation process and results because the majority of the evaluation process involves human evaluation; and in some cases they avoid expert evaluation due to several complexities, which may reduce the effectiveness of the evaluation results.

3.1.2.2 Heuristic approach (HA) The methodology for heuristic evaluation of web content accessibility testing is described in this section. Among the 23 papers, three (3) studies included heuristic evaluations that focused on accessibility testing processes (representing 13% of the total literature). These studies were grouped into four major drawbacks, as presented in Table 8.

Table 8 The three studies related to the heuristic approach (HA), grouped by four major challenges
References | Identified challenges
Li et al. [4] | HA1. Not cost-efficient; HA2. Limited number of evaluation checkpoints
Giraud et al. [2] | HA3. Limited number of assessment features
Mazalu and Cechich [10] | HA4. Expert opinion/assessment required

Heuristic approaches generally enable human inspection in various ways to evaluate web accessibility. According to past research, several studies have concentrated on heuristic approaches to evaluate the effectiveness and accessibility issues of the web. For example, Li et al. [4] addressed that a certain number of checkpoints during the evaluation of web accessibility require human inspection, such as volunteer participation or expert opinion. Due to a lower level of expertise and inappropriate knowledge, the evaluation task might seem complicated and yield poor evaluation results. To address this issue, they proposed a heuristic approach considering a task assignment strategy called Evaluator-Decision-Based Assignment (EDBA) to enhance the selection of participants and experts by using evaluators' prior evaluation records and knowledge of their areas of competence.
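The exact EDBA procedure is not reproduced here, but the hedged Python sketch below shows one simple way a task-assignment strategy of this kind could work: each manual checkpoint is routed to the available evaluator with the best prior accuracy in the matching competence area. The data structures and selection rule are assumptions for illustration, not Li et al.'s [4] algorithm.

# Hypothetical evaluator records: competence area mapped to historical accuracy per evaluator.
evaluators = {
    "alice": {"images": 0.92, "forms": 0.60},
    "bob": {"images": 0.70, "forms": 0.88},
}

def assign(tasks):
    # Route each checkpoint to the evaluator with the highest recorded accuracy for its area.
    assignment = {}
    for checkpoint, area in tasks.items():
        best = max(evaluators, key=lambda name: evaluators[name].get(area, 0.0))
        assignment[checkpoint] = best
    return assignment

tasks = {"1.1.1 Non-text content": "images", "3.3.2 Labels or instructions": "forms"}
print(assign(tasks))   # image-related checkpoints go to alice, form-related ones to bob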


Fig. 3 Number of reviewed studies on each identified limitation or challenge

Fig. 4 The proposed automated accessibility testing framework

Additionally, Giraud et al. [2] argued that adherence to accessibility standards does not guarantee that a website is fully accessible for users with impairments, particularly users with blindness. They pointed out that usability standards are crucial for making a website fully accessible to users with all kinds of disabilities. One critical criterion for enhancing the usability of websites is redundant information filtering. They proposed a heuristic approach focusing on redundant and irrelevant information filtering, in a study including participants with blindness. They found that eliminating redundant information and information filtering enhances website accessibility, user satisfaction, and navigation performance.

Furthermore, Mazalu and Cechich [10] highlighted that it is important to consider both developer and end-user requirements to encourage accessibility support for individuals with impairments. To evaluate intelligent features specifically for users with visual impairment, they proposed a web accessibility assessment methodology that incorporates a multiagent system. The proposed system was validated through several assessment tools and results.


Although these approaches are effective, several factors, including a limited number of assessment features, limited guideline implementation, cost-associated difficulties, and the expert assessment process, reduced the outcome of these proposed systems.

Referring to Tables 1, 2, 3, 4 and 5, it can be concluded that the existing solutions have several disadvantages that make the developed techniques less effective. For example, in automated evaluation, more specifically in the declarative model, it was noted that consideration of a limited number of guidelines and checkpoints, fewer assessment features, and a lack of consideration of user and expert evaluation for accessibility assessment and validation are factors making these processes less effective. Also, a lack of attention has been observed in most of the tools as to which samples or objects of the website they evaluated and which criteria they applied to assess those website objects. Besides, for the ontological model, the major limitations were found to be associated with a lack of consideration of expert opinion/assessment, ontology updating issues for adding new guidelines, which are complex, laborious, and challenging tasks for the developer, and inconsistency in the knowledge base and database that might alter the evaluated results. For the algorithmic evaluation process, a limited number of assessment features, a lack of user and expert evaluation, and limited statistics for accessibility error computation are the primary factors reducing effectiveness. In contrast, in the context of hybrid evaluation, specifically for crowdsourcing systems, our findings indicate several issues with user and expert assessment, consideration of accessibility and usability criteria, task distribution, assessment time minimization, and cost reduction that might hamper the evaluation process and limit the advancement of the developed crowdsourcing systems. Besides, a few drawbacks of heuristic approaches have been identified, including a limited number of evaluation checkpoints and assessment features, cost-sensitiveness, and a lack of an expert assessment process. The majority of the proposed solutions only considered error identification or highlighted the violated guidelines, and their effectiveness was constrained by not taking into account the computation of an overall accessibility score. Most works focused on all types of disabilities; in some works, only persons with vision impairments were taken into account.

Figure 3 presents the number of papers on each identified limitation of the reviewed papers. This figure depicts that the most frequent issues, challenges, or limitations addressed in the majority of the studies are: lack of consideration of user and expert criteria, solutions that are not time efficient, and limited checkpoint and assessment feature consideration. The observations for our research question conclude that a number of issues appear in the existing approaches that make the evaluation results ineffective, which raises a further concern for web researchers. In summary, addressing our research question (RQ), the following drawbacks and challenges have been observed frequently in multiple developed solutions and reduce their effectiveness:

● Difficulties understanding the guidelines (WCAG) that are written in natural language format;
● Consideration of only a limited number of WCAG success criteria;
● Insufficient attention given to user behavior, user requirements, and expert suggestions;
● Challenges in the mapping process between success criteria and website features;
● Semantic features of websites and related engineering techniques are not given enough attention;
● The process of accessibility evaluation and score visualization is ambiguous, so it is difficult to determine which criteria have been looked into and which have been skipped;
● Terminologies used in assessments are ambiguous and do not accurately reflect their intended meaning.

From the above findings, it can be concluded that the majority of web accessibility testing processes (both automated and hybrid evaluation processes) have several issues that hinder their effectiveness. Following the several limitations of the existing web accessibility testing systems, there is an emerging need to develop an updated accessibility testing tool to mitigate the existing shortcomings of the current tools. Therefore, in the following part of the paper, our objective is to propose an accessibility testing framework considering the determined aspects that can mitigate the limitations of available automated accessibility testing systems and improve the effectiveness of the evaluation results. The proposed accessibility testing framework for automated evaluation is demonstrated in the following section.

3.2 Proposed automated accessibility testing framework

According to the outcome of the conducted literature review that listed several existing shortcomings (in Sect. 3.1), our observation is that the main aspects leading to incorrect perception, encoding, and development of accessibility evaluation tools are:

● Understanding difficulties of natural language formatted web content accessibility guidelines;
13
Universal Access in the Information Society

● Limited consideration of user requirements and expert suggestions;
● Lack of semantic concern.

These make the evaluation results less credible and less acceptable. Most accessibility testing tools only check a specific number of WCAG success criteria, around 50% of the total guidelines. As a result, the evaluation process is restricted and the overall evaluation result might be inaccurate. As many web accessibility guidelines cannot be assessed automatically, the tools also do not specify whether a guideline requires user/expert testing or not. This can also cause incorrect metric calculation and evaluation report formulation. Without incorporating users' requirements/opinions and expert suggestions during the development of accessibility testing tools, the evaluation process may overlook some crucial aspects and inadvertently inflate the final accessibility score. Besides, a lack of consideration of semantic aspects may reduce the effectiveness of the evaluated results. Therefore, to minimize such issues, an accessibility aspects framework for automated web accessibility testing considering the following aspects could help the development procedure and improve the evaluation process with accurate results:

● Simplifying the updated web content accessibility guidelines to represent the guideline knowledge in the easiest and most effective manner;
● Incorporating all success criteria in the evaluation process to make the evaluation results more effective, while improving the fairness of the evaluated result;
● Incorporating user requirements/opinions together with expert suggestions during the evaluation process as an additional evaluation criterion;
● Incorporating separate complexity analysis algorithms for textual and non-textual feature analysis, focusing on semantic aspects to improve the effectiveness of the evaluated result;
● Categorizing the evaluated guidelines in terms of user evaluation and expert evaluation when a guideline is not applicable for automatic evaluation;
● Displaying the evaluation result with the overall accessibility score along with specific accessibility scores for each disability type.

Fig. 5 Use case diagram of the proposed automated accessibility testing framework

Figure 4 shows the proposed automated web accessibility evaluation framework. The proposed framework considers several aspects related to appropriate accessibility guideline selection, consideration of user requirements and expert suggestions (as additional evaluation criteria), and guideline knowledge simplification prior to the algorithmic coded process, which facilitates the computation of appropriate accessibility evaluation scores. Figure 5 depicts a use case diagram that explains the specifics of the accessibility assessment procedure carried out by the proposed framework.
All the aspects shown in Fig. 4 are discussed in the following sections.

3.2.1 Aspects of web content accessibility guideline

Several governments and organizations from different countries have presented various accessibility guidelines in recent years. Referring to past studies [35, 36], the Web Content Accessibility Guidelines (WCAG) are the most sophisticated and most widely accepted set of guidelines. WCAG has several versions and it is the guideline most used by the existing testing tools. Its newer version (WCAG 2.2) has more features/success criteria than its previous versions (WCAG 1.0, 2.0, 2.1). It contains 13 guidelines with 87 success criteria that are distributed into three conformance levels: A, AA, and AAA, where 33 success criteria are assigned to level A, 24 to level AA, and 30 to level AAA. The conformance levels express the priority of the success criteria in terms of their importance, where A refers to what might be included based on the development/evaluation criteria, AA refers to what should be included, and AAA refers to what must be included. However, to make the evaluation process effective and correct, it is crucial to incorporate all success criteria under the three conformance levels. Therefore, in our proposed automated accessibility testing framework, we considered every success criterion of the updated web content accessibility guideline (WCAG 2.2), which could facilitate improving the overall evaluation result. An overview of WCAG 2.2 is shown in Fig. 6. Detailed information about WCAG can be found in [37].

Fig. 6 Overview of WCAG 2.2 (the latest version of WCAG)

3.2.2 Aspects of user opinions and expert suggestions

From the evaluation of the existing work (Sect. 3.1), it can be seen that missing consideration of user opinions and expert suggestions is the most common limitation. As the development of automated web accessibility testing tools is limited to incorporating few/specific guidelines, the evaluation process might not consider every aspect of the difficulties associated with disabilities. Sometimes, incorporation of every success criterion of the guidelines is not possible through automated means. Following this limitation, consideration of user opinions and expert suggestions might be a valuable resource for identifying other supplementary requirements as additional evaluation criteria alongside WCAG during the development of an automated accessibility testing tool.

Generally, user opinion refers to the expressed opinion of users based on difficulties they have encountered during the experimentation or web evaluation process [38]. Depending on the sort of experiment and the user opinion collection, user opinions may be collected in different forms. The most common and effective way is questionnaire-based evaluation (questions asked to the user) to express their opinion [39]. In the context of accessibility evaluation of websites, a few researchers mentioned additional aspects that are encountered frequently during website access and introduce accessibility issues, such as the necessity of webpage availability, manual text size adjustment availability, manual color adjustment availability, necessity of user information, and availability of textual and image CAPTCHA. Unexpectedly, web content accessibility guidelines, including WCAG, do not address these aspects. Therefore, we prepared our questionnaire concerning these aspects, which helps us understand the user's perspective and obtain their particular requirements regarding every single aspect. Also, understanding user requirements might be helpful to improve the overall accessibility evaluation process.

In the context of expert suggestion, it refers to the recommendation to improve the prototypes of some particular aspects to avoid some unconsidered situations [40]. From the web content accessibility perspective, experts are people who have a thorough understanding of accessibility standards as well as technical knowledge of the website design and development process. Depending on their role, accessibility experts may be web developers, web designers, accessibility test specialists, UX/UI experts, and researchers [41].
A few additional aspects, such as word/sentence length, specific font family, font size, and other factors, require fixed determinators to make a website accessible. To determine these aspects, we considered expert suggestions or perspectives as an additional factor, as experts have more expertise, experience, and knowledge for making critical judgments. We interviewed five experts from the Electrical Engineering and Information Systems Department of the University of Pannonia, Hungary, to provide insightful guidance based on their knowledge and experience. Of the five experts, three were professors with over 20 years of experience in digital platform accessibility perspectives, and the other two were PhD students in their final year, focusing on digital inclusion and human-computer interaction, with over 5 years of experience in accessibility aspects. From the expert suggestions, we considered 13 additional factors that are beyond the criteria mentioned in WCAG. These additional requirements include proper loading time, page length, appropriate number of internal/external links, number of images and video content, accessible color pair, proper word length, sentence length, paragraph length, text content length, font size, font family, text pattern complexity (e.g., italic, bold), and content types. By considering expert suggestions along with WCAG, including any new elements they recommend, it might be possible to enhance the website evaluation results.

After obtaining all the additional criteria from user and expert suggestions/opinions, we incorporated these criteria into our evaluation process, that is, into our proposed framework, along with WCAG as additional rules or guidelines to improve the entire accessibility evaluation process.

3.2.3 Aspects of guideline simplification

Guideline simplification is a part of text simplification, as guidelines are written in natural language format. It is the process of simplifying the existing guidelines to make them more comprehensible for users or associated authorities [42]. As web content accessibility guidelines are written in natural language format and there is no logical representation of these guidelines, it is relatively difficult to understand and implement them during web accessibility evaluation tool development [43]. To understand these complex guidelines, adequate accessibility knowledge and high-level technical competence are required. In that regard, we considered the concept of web content accessibility guideline simplification, which can help to represent the guidelines simply and effectively. In the guideline simplification process, we categorized all the success criteria of the web content accessibility guidelines into eight criteria: guidelines, objects, attributes, component type, requirements, conformance level, beneficiary type, and evaluation type/phase, as shown in Fig. 7. This classification process might represent the simplified guidelines more systematically, which will help to encode every guideline and perceive the website features appropriately through the developed accessibility evaluation algorithm. Also, guideline modelling helps to encode each guideline semantically.

Fig. 7 A simplification process of web content accessibility guideline

3.2.4 Aspects of automated testing

Automated accessibility testing methodologies proposed by several accessibility research groups, such as Accessibility Conformance Testing (ACT), source code mining, Application Programming Interface (API) based testing, and ontology-based testing, are prominent. However, these approaches have several limitations when it comes to conducting accessibility testing. For example, for the implementation of ACT rules, maintaining a unique ID is necessary for mapping web attributes to one or more WCAG success criteria, which is challenging [44]. In source code mining, the analyzed code might not be well-structured enough to extract the relevant regularities due to poor precision and recall, and sometimes a substantial amount of user involvement is required [45]. Besides, API testing is effective, but it does not allow interaction with real user activity and proceeds only with a raw request [46]. In ontology-based evaluation, inconsistency between the knowledge base and the database might reduce the effectiveness of the result [47].
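Before turning to the algorithmic evaluation itself, the eight-way categorization from Sect. 3.2.3 can be illustrated with a small machine-readable record per success criterion. The sketch below is only an illustration of the idea, not the authors' implementation; the field names and the example mapping for SC 1.1.1 are assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SimplifiedCriterion:
    """One WCAG success criterion after the simplification step of Sect. 3.2.3."""
    guideline: str            # e.g. "1.1 Text Alternatives"
    success_criterion: str    # e.g. "1.1.1 Non-text Content"
    objects: List[str]        # web objects the rule applies to
    attributes: List[str]     # HTML attributes that must be inspected
    component_type: str       # "textual" or "non-textual"
    requirement: str          # plain-language statement of what must hold
    conformance_level: str    # "A", "AA" or "AAA"
    beneficiary_types: List[str] = field(default_factory=list)  # affected disability groups
    evaluation_phase: str = "automated"  # "automated", "user" or "expert"

# Hypothetical example entry; the exact wording and mapping are assumptions.
sc_1_1_1 = SimplifiedCriterion(
    guideline="1.1 Text Alternatives",
    success_criterion="1.1.1 Non-text Content",
    objects=["img", "area", "input[type=image]"],
    attributes=["alt", "title", "aria-label"],
    component_type="non-textual",
    requirement="Every non-text element exposes a text alternative.",
    conformance_level="A",
    beneficiary_types=["visual impairment", "cognitive impairment"],
    evaluation_phase="automated",
)
```

Encoding every criterion in such a uniform structure is what allows the evaluation algorithms to select, by evaluation phase, which criteria can be checked automatically and which must be routed to user or expert testing.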
Fig. 8 Work-flow diagram of algorithmic evaluation of accessibility testing

To overcome these difficulties, several studies concluded that algorithmic evaluation of web content is important, especially for accessibility testing of web platforms [1, 16]. Through the algorithmic evaluation process, it is possible to analyze website source code while incorporating every guideline offered by the Web Accessibility Initiative. As WCAG covers most of the prototypes of a website, including textual and non-textual content and features, we incorporated two separate algorithms for complexity analysis of textual content and non-textual features, which might improve the performance of the algorithm and provide a straightforward guideline for the algorithm specification. To implement the textual and non-textual algorithms, we considered the most effective process to be parsing the website's HTML code and analyzing its features using Artificial Intelligence (AI) techniques, which might be effective in assessing each web feature and validating the guidelines and the other additional requirements (from user requirements and expert suggestions) appropriately. Figure 8 shows the workflow diagram of the algorithmic evaluation process for accessibility testing of a particular website. We consider this the most convenient algorithmic evaluation process, validating every guideline for every webpage feature and evaluating them in terms of four assessment terminologies, 'Pass', 'Fail', 'Not tested', and 'Not detected', where Pass refers to those guidelines that have been followed and successfully implemented; Fail refers to those guidelines that have been followed, but wrongly implemented; Not detected refers to those guidelines that should be followed, but were not implemented; and Not tested refers to those guidelines that require user or expert testing, because the software does not test them.

3.2.4.1 Text complexity To determine the accessibility of webpage content in the context of textual complexity, the potential solution is to derive a text complexity algorithm that evaluates the HTML textual content features through Natural Language Processing (NLP), analyzing each element associated with textual aspects, validating their accessibility, and determining the associated complexity in a semantic manner. WCAG 2.2 states that 35 success criteria are associated with textual elements, which are essential components for determining and ensuring accessibility and reducing the complexity of webpage surfing. The following elements are associated with textual aspects and were incorporated in the text complexity algorithm. Besides, for each addressed element in this section, the relevant web content accessibility guidelines (WCAG 2.2) and some other rules or directions about the additional elements are given in Table 10 (Appendix).

[Images]: Image is one of the key elements of web content that is used frequently to convey descriptive information to users. Ensuring proper titles and descriptions of all the images on a webpage such as Image, Gif, Animations,
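As a minimal illustration of the workflow in Fig. 8, the following sketch parses a page and classifies the image-related check for SC 1.1.1 (Non-text Content) into the assessment terminologies defined above. It is a simplified stand-in for the proposed algorithms, assuming Python with the requests and BeautifulSoup libraries; the decision rules in the comments are simplifying assumptions, not the paper's exact logic.

```python
import requests
from bs4 import BeautifulSoup

def check_image_alternatives(url: str) -> dict:
    """Classify <img> elements against SC 1.1.1 using the four assessment terms."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    result = {"PASSED": 0, "FAILED": 0, "NOT DETECTED": 0, "NOT TESTED": 0}
    for img in soup.find_all("img"):
        alt = img.get("alt")
        file_name = (img.get("src") or "").rsplit("/", 1)[-1]
        if alt is None:
            result["NOT DETECTED"] += 1   # rule applies, but no alt attribute at all
        elif alt.strip() == "":
            result["NOT TESTED"] += 1     # empty alt is valid only for decorative images
        elif alt.strip().lower() == file_name.lower():
            result["FAILED"] += 1         # alt present, but it merely repeats the file name
        else:
            result["PASSED"] += 1         # a non-trivial text alternative exists
    return result

if __name__ == "__main__":
    print(check_image_alternatives("https://example.com"))
```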
Logos and decorative images can improve the accessibility to select and proceed with action with a single tap. Icons are
of website images. generally either graphic files or single images. The infor-
mation is represented by icons in terms of directions that
[Pre-recorded/Live Audio]: In order to share information facilitate the navigation process. Fields are single text boxes
among users, pre-recorded or live audio content is fre- that accept input based on its type, including text, numbers,
quently attached to websites. All audio content, whether characters, links, etc. Labels are for the representation of
pre-recorded or live, like radio webcasts, must include information as simple as possible. All of these aspects are
a correct caption and descriptive description to ensure its crucial for providing a more organized representation of the
accessibility. information and for making website access more enjoyable
and comfortable. It is important to provide accurate infor-
[Pre-recorded/Live video]: Video is an effective and valu- mation and clearly define their function in order to make
able content of a website to represent information to users. these features accessible. In addition, the text of these fea-
Receiving information through video content is more tures must be aligned within 80 characters, not be justified,
beneficial than textual content, especially for people with and have proper font size.
disability. To make these resources more accessible, it is
important to define appropriate captions and descriptive [Text]: The webpage text content or textual content should
descriptions to describe any video content, whether it is live maintain an appropriate color and contrast ratio for different
or pre-recorded video, such as video conferencing or live text sizes, such as a normal text size of 4:5:1 ratio, or a large
speech. text size of 3:1 ratio (according to WCAG 2.2). Text on a
website should have the opportunity to be resized by up to
[Links]: Links are the supplementary resources of a web- 200% and to change the background and foreground colors,
page that aim to enhance or extend the knowledge of the web to increase its visual accessibility. To increase its authentic-
content. A webpage may have several internal and external ity, repeated and duplicate material must not be added to
links to extend its information. Links are not allowed to be the webpage. For content representation simplicity, line,
broken or unavailable, should be a maximum of 80 charac- paragraph, letter, and word spacing should be kept to 1.5, 2,
ters long, and not be justified to maintain accessibility. The 0.12, and 0.16, respectively. The CSS pixel width and height
purpose of the link should be properly stated to understand should be large enough (at least 320 and 256) to allow con-
its usefulness, should be distinguishable from the textual tent scrolling toward vertical and horizontal alignment.
content, and must be embedded in 1.5 font.
[Title]: The title is a very important feature to reflect the pur-
[Words]: Words are the normal text that is used to repre- pose of the webpage. An accurate, descriptive, and appro-
sent the content or information of a webpage. To improve priate title is essential to help people to grasp the objective
the accessibility of the content or represented information, of the webpage.
it should be meaningful, and simple to understand. All the
words in the content, images, fields, and menu list should be [Heading and labels]: The secondary essential elements or
understandable. qualities of the webpage that are most important for accu-
rately representing the information are the headings and
[Sentences]: Sentences are the sequential representation of labels. These enhance the semantic quality of the web con-
a group of words used to represent the actual meaning or tent. A descriptive, meaningful, and appropriate description
idea. For webpage content, meaningful and simple sentences of headings and labels can improve the accessibility of the
are essential to improve its readability and accessibility. webpage content.

[Paragraph]: A paragraph is the combination of multiple [Language]: Language is the unified aspect or feature of a
sentences to represent complete information of web content webpage to make it globally accessible to the community.
to the user. It is a descriptive explanation of the informa- All webpages must be available in an English language
tion that is called an extended form of multiple sentences. page with its native language choice. Besides, the multiple
A paragraph should be meaningful, simple, precise, and language selection option is an important factor in making
contain useful but sufficient information that will help to the website accessible to users. Moreover, using different
understand the content. languages on the same webpage makes the content robust
to grasp and retain its consistency. Thus, keeping a single
[Button, icons, fields, Label]: Button, Icons, Fields, and language for all sections and paragraphs of the content
Labels are user interface components. Buttons enable users

is another crucial feature of accessibility that must be the expected input data reduce the accessibility of the con-
maintained. tent. Therefore, clear instructions about input data should be
defined properly such as checking the box, selecting single/
[Idioms]: Idioms are related to complex and unusual words multiple objects, etc.
that make the content difficult for the user to understand.
Idioms should be avoided in the webpage content. [Search field]: A Search field or search box is mostly used
by people with disability who have issues with content
[Jargons]: Jargon is a complex pattern that makes the con- navigation or looking for certain information on a webpage.
tent difficult to interpret especially for people with disabili- If a webpage lacks organized content, an effective search
ties which reduces the accessibility of the content. Jargon field can help to improve the navigation. The search field
should be avoided in webpage content. or search box must have an understandable name, a simple
design, and appear exactly with the same name and the same
[Abbreviation]: Abbreviations are the shorter form of words way on every page.
or phrases, such as IT, which stands for Information Tech-
nology. Although abbreviations might be helpful in certain [Form]: Forms are important tools on websites for commu-
situations, for accessibility, it is not an appropriate decision nicating with users for a variety of purposes, including data
to use such short forms as people with disabilities have sev- collection. In order to understand and clarify the instruc-
eral issues with the short form of words or phrases. How- tions regarding the expected input from the user, a web form
ever, in unavoidable situations, it is necessary to provide a needs to have a textual format description.
broad and expanded form of the used abbreviation to under-
stand its meaning. [Error]: If necessary, the error function displays an error
message to inform the user about error reason or recom-
[Pronunciation]: Pronunciation is the process of under- mended user action. The defined error message must be
standing words or sentences without facing any difficulty. appropriate and fully represent the instruction to make the
To improve pronunciation ability, meaningful words, and error generation accessible.
sentences are important. Complex and ambiguous words
should be avoided in the text content of webpages. [Word length]: Word length indicates how long a word
should be. Long words are difficult to pronounce and under-
[Reading level]: Reading level refers to the ability to read stand by people with disabilities, thus all words must be
the content by the user without any difficulty, especially shortened as much as possible to make the information
for people with disabilities. Maintaining reading levels is accessible to the user. Besides through collecting user opin-
important to improve the accessibility of web content. ion or user requirements, it may possible to identify the
appropriate length of words that could be helpful for the
[Context-sensitive content]: Several complex words, ques- web developer and researcher.
tions, patterns, and sentences are referred to as context-sen-
sitive content that reduces the accessibility of web content. [Sentence length]: Proper sentence length is an important
For these, a proper and detailed explanation could reduce aspect of making the content accessible and understandable
the difficulty of the content understanding. for people with disabilities. A long sentence may not be ben-
eficial to the user for effectively representing the content.
[Drop-down menu, dialog box, checkbox, combo
box]: Drop-down menus provide a list of objects or items [Paragraph length]: A long paragraph can make the con-
to interact with the menu through clicking or cursor hover- tent monotonous and difficult to understand and remember
ing. A dialog box is a type of pop-up window that is used to by people with disabilities. An effective and flexible para-
display informational messages including alerts, prompts, graph limit might make the content accessible to users with
and confirmations. Missing or irrelevant information in dia- disabilities.
log boxes and drop-down menus may make the website less
accessible. Such missing and inappropriate information is [Text content length]: When referring to content length, all
not allowed in these elements. A checkbox is a square box words, phrases, and paragraphs are included. Long textual
that allows it to be ticked or checked for any active action. content forms difficulty in terms of accessing, searching,
Combo boxes make it possible to choose an item from a
long list of options, making it easier for the user to locate
the desired item. Sometimes, difficulties in understanding

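The length-related checks above (word, sentence, paragraph, and text content length) reduce to simple statistics over the extracted text. The snippet below is a rough sketch of such a check; the two thresholds are placeholders chosen for illustration, since neither WCAG nor the expert suggestions fix them to exact universal values.

```python
import re

def text_length_report(text: str, max_word_len: int = 12,
                       max_sentence_words: int = 25) -> dict:
    """Crude length statistics used as inputs to a text-complexity check."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    long_words = [w for w in words if len(w) > max_word_len]
    long_sentences = [s for s in sentences if len(s.split()) > max_sentence_words]
    avg_sentence_len = (len(words) / len(sentences)) if sentences else 0.0
    return {
        "words": len(words),
        "sentences": len(sentences),
        "avg_words_per_sentence": round(avg_sentence_len, 1),
        "long_words": len(long_words),
        "long_sentences": len(long_sentences),
    }

print(text_length_report("Short sentences help. Extremely long and convoluted "
                         "sentences with many clauses are harder to follow."))
```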
and understanding especially for people with intellectual or status. According to WCAG 2.2, 12 success criteria are
cognitive disabilities. associated with non-text elements that validate the acces-
sibility of web content. A list of website features related to
[User information]: In some situations, websites require non-text aspects that we incorporate during our automated
user personal information such as user name, email address, web accessibility testing follows next. Also, web content
password, location, a particular interest, etc. These make the accessibility guidelines (WCAG 2.0) and some other rules
website inaccessible as users may not agree to share their or directions about the additional elements are given in
information and sometimes users with disabilities don’t Table 10 (Appendix).
understand the actual meaning of these requirements.

[CAPTCHA]: Websites with CAPTCHA (“Completely Auto- [Field, Button, Link]: In terms of accessibility, fields, but-
mated Public Turing test to tell Computers and Humans tons, and links must have accessible color concerning peo-
Apart”) have been very common in recent times for secu- ple with disabilities such as visual impairments including
rity purposes or user behavior understanding. Some users partially blind and color-blind users.
prefer text-based CAPTCHA, while others find image-
based CAPTCHA to be more useful. However, few users [Image, Logo]: Images and logos are important visual
have issues with CAPTCHA as it requires careful attention. aspects or user interface elements on websites. People with
They fail to provide the right response, resulting in repeated visual and cognitive disabilities may have difficulty access-
attempts or refusals to browse. It frustrates users and reduces ing these elements if the proper contrast ratio is not main-
their interaction with that specific website. tained. To improve accessibility of these components, the
logo should retain a contrast ratio of 7:1, while the suitable
[Font size]: As a default font size of webpage content, font contrast ratio for picture content is aligned with a 4:5:1 ratio
size must be suitable for every group of people with dis- (according to WCAG 2.2).
ability to ensure content accessibility. Mostly, font size 12
is known as an accessible font size for web content, but not [Text in Images]: Text in image refers to the textual con-
for every type of disability such as people with severe vision tent of an image that represents the inherent information
impairment. Thus, manual font size adjustment is important of images. To make this textual content accessible, the
to ensure content accessibility to every group of people with acceptable contrast ratio for normal text should be 7:1, and
disability. the acceptable contrast ratio for large text should be 4:5:1
(according to WCAG).
[Font family]: As a default font family of webpage content,
an appropriate font family is important to make the con- [Maps, images, Diagram, Data tables, presentation,
tent accessible for every type of disability. Sans serif is the video]: To improve visual accessibility, appropriate width,
most frequently and most widely used font family, though it and height specification is important for visual content such
depends on the users’ requirements. as Maps, Images, Diagrams, Data tables, Presentations, and
video. According to accessibility guidelines, to make these
[Complexity of text pattern]: Text pattern refers to text rep- elements widely accessible for everyone, the required width
resentation style such as italic, bold, etc. Inappropriate text and height of the content should be 320 and 256 CSS pixels,
patterns can make the content difficult to understand. For respectively.
some users the ‘italic’ text pattern is confusing. Therefore,
the use of appropriate text patterns may reduce the complex- [Input size]: Input size specifies the length of the content
ity of the content text and improve its accessibility. that is required as input to the input field. In some cases,
input size may make content less accessible. For example, a
[Website content type]: Content type represents whether a short or long input size may dissatisfy the user. To minimize
website should contain solely text, images, or video content. such issues, the ideal input size is 24 CSS pixels (minimum)
A proper content type is important as it helps to represent AND 44 CSS pixels (maximum).
content and make it more interesting and user-friendly.
[Markup language elements]: To retain the webpage struc-
ture and content accessible to everyone, all markup lan-
3.2.4.2 Non-text complexity A non-text complexity algo- guage elements for HTML or CSS such as bold tag < b></
rithm could be a potential way for the evaluation of non-tex- b>, heading < h></h>, paragraph < p></p > should define
tual elements on webpages to determine their accessibility

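The contrast requirements mentioned for text, logos, and text in images can be verified programmatically with the contrast-ratio definition published in WCAG 2.x (relative luminance of the lighter colour plus 0.05, divided by that of the darker colour plus 0.05). The sketch below implements that published formula; how foreground/background colour pairs are extracted from the page is omitted and would be tool-specific.

```python
def _linearize(channel: int) -> float:
    # sRGB channel (0-255) to linear value, per the WCAG 2.x definition.
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    r, g, b = (_linearize(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    """WCAG 2.x contrast ratio between two sRGB colours, range 1.0 to 21.0."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background: ratio 21.0, which passes both the 4.5:1
# (normal text) and 7:1 (enhanced contrast) thresholds.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))
```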
with start and end tag, and specify the role/property of the these are useful for sharing content, it sometimes makes it
elements. difficult to understand the content objective for people with
disabilities. In certain situations, the uses of an excessive
[Loading time]: Webpage loading time is the average time number of images cause serious inaccessibility of the con-
that a webpage takes to appear on the user’s screen after tent. Thus, a limited number of images should be used on a
searching or browsing through their web address on the particular webpage.
search panel. Loading time is important to improve acces-
sibility because if a page has some issues and takes longer [Number of video content]: Sharing information through
than usual to load, user dissatisfaction will arise which may video content is effective for people without disabilities, but
cause the exclusion of that webpage interaction. people with impairments may not find it helpful. Users with
disabilities may find it difficult to understand the informa-
[Page length]: Webpage length refers to its display size and tion that is represented within a few minutes. So, the use of
content length or navigation time. When a webpage is too video content should be limited on the webpage.
long, it can be extremely difficult for people with disabilities
to use it because they may have mobility problems or cogni- [Accessible color pair]: A set of complementary colors can
tive challenges that make it ineffective for them to browse make colors more accessible. For example, for displaying
for a long period. So sufficient material should be correctly a webpage banner, a combination of background color and
organized within a conventional page limit. text color may make the content difficult to read or under-
stand. Inappropriate color combinations render the web-
[Website availability]: Website availability also referred to site inaccessible, especially for people with partial visual
as website uptime, is to guarantee that users can browse or impairment or colour blindness.
access the page whenever they want. In case of failure of
website availability, it may reduce the effectiveness of the
webpage and users may be less likely to visit that site regu- 3.2.5 Aspects of accessibility issues and score visualization
larly. Therefore, maintaining website availability is a criti-
cal prerequisite for enhancing accessibility. After performing the accessibility investigation, we orga-
nized/represented the evaluation result by considering sev-
[Manual text size/font adjustment]: Allowing manual text eral statistics. The major statistics concluded related to the
size/font size adjustment is one of the major factors in reduc- evaluated result in terms of each algorithm, accessibility
ing accessibility. As preferred text size or font size varies score for each disability type, overall accessibility score
from person to person according to their comfort, allowing with accessibility status, and arbitrary information of the
users to select several font sizes can enhance the usability evaluated webpages.
and accessibility of a website. Regarding the algorithmic evaluation, the first algorithm
(Non-Text Complexity Analysis) evaluates the accessibility
[Manual Color adjustment]: Similar to users manually concerns of the webpage by considering its non-text compo-
adjusting font size, offering several color adjustment options nents. It is able to assess 19 web objects in total, including
to users while navigating a website can increase user sat- their functionalities and other aspects. Similar to the first
isfaction and accessibility of a webpage. This criterion is algorithm, the second algorithm (Tex-Complexity Analysis)
especially important for people with color impairment. analyzes all of the webpage’s text components to determine
how complicated or problematic they are from an accessi-
[Number of internal and external links]: Internal and exter- bility standpoint. It is able to assess a total of 12 web objects
nal links (hyperlinks) are additional webpage resources that considering the textual components. The algorithmic
aim to provide more information to users. Unfortunately, evaluation results were arranged into six different aspects
most people with disabilities do not consider these addi- considering “Success Criteria” “Conformance Level”,
tional resources useful and they hardly use these resources. “Feedback”, “Result”, “Impairments Type” and “Improve-
Besides, these resources sometimes make users confused ment Direction”. We represent the algorithmic evaluation
about the actual content and additional content. Thus, a lim- result into six categories since in terms of accessibility sup-
ited number of internal and external links should be added port; most of the developed approaches [16, 22, 47] have
to the websites to avoid these uncertain experiences. no clear indication of the implemented techniques or suc-
cess criteria and their conformance level. Therefore, users
[Number of image content]: To represent webpage infor- are unable to distinguish between accessibility features that
mation, images are frequently used components. Though have been implemented and those that are not covered by

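Several of the page-level aspects above (loading time, page length, availability, and the number of links and images) can be screened with a single HTTP request. The sketch below compares a page against the indicative thresholds later listed in Table 10 (Appendix); note that a single GET round-trip is only a rough proxy for real perceived loading time, and the libraries used (requests, BeautifulSoup) are an assumed toolchain rather than the paper's own implementation.

```python
import requests
from bs4 import BeautifulSoup

def page_budget_report(url: str) -> dict:
    """Compare a page against the indicative thresholds of Table 10 (Appendix)."""
    resp = requests.get(url, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    return {
        "response_seconds": round(resp.elapsed.total_seconds(), 3),  # target <= 0.3 s
        "page_kilobytes": round(len(resp.content) / 1024, 1),        # target <= 14 KB
        "link_count": len(soup.find_all("a", href=True)),            # target <= 50
        "image_count": len(soup.find_all("img")),                    # target <= 10
        "available": resp.status_code == 200,                        # website availability
    }

print(page_budget_report("https://example.com"))
```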
the developed approach. Also, without indicating the con- accessibility status has also been provided. Besides, we
formance level of each success criterion introduces dif- summarized some arbitrary information that helps to under-
ficulties in understanding the importance of a particular stand some basic information about the tested webpage such
success criterion. Thus, to increase the effectiveness of the as page URLs, page title, total number of checked HTML
evaluation report, we have also provided information on the elements, page size or length, and page loading time. These
conformance level with reference to the specific success cri- statistics help a wide array of people (i.e., end users, design-
teria. After that, we offered feedback regarding the evalu- ers, developers, practitioners, etc.) to understand the overall
ation status of each success criterion. Furthermore, in the accessibility status of the tested webpage.
context of result categorization, almost all the developed
tools categorize the accessibility guidelines in terms of sev-
eral terminologies such as passed, failed, cannot tell, known 4 Proposed automated accessibility testing
error, likely error, potential error, error, warning, success, Framework validation
and not applicable. These terminologies sometimes denote
an uncertain outcome. For example, failed, cannot tell, error, To validate the proposed framework, we performed a com-
warning and not applicable terminologies are considered a parative analysis with similar existing models considering
negative result. It represents that the guideline is either not several functional properties, such as updated guidelines,
fulfilled or not identified or difficult to identify. It can also user and expert criteria consideration, textual/non-textual
mean that the website has structural defects or programmatic component analysis, evaluated and not evaluation check-
errors in the evaluation tool. Also, for likely error terminol- points feedback, overall accessibility score computation
ogy, it is not clear whether it is an identified error or not. So, and accessibility score computation for different disability
from these uncertain categorizations, it is difficult to under- types. Regarding comparative assessment, we compared our
stand the concluded accessibility score that could lead to proposed framework with the existing six models that have
misleading accessibility representation. Therefore, we con- been mentioned in state-of-the-art literature in terms of sev-
cisely categorized all the terminologies or assessment cri- eral functionalities as mentioned in Table 9.
teria into PASSED, FAILED, NOT DETECTED, and NOT Table 9 depicts that for information regarding the updated
TESTED, referring to each evaluated success criteria to cal- version of accessibility guidelines, one model considered
culate the accessibility score and to appropriately evaluate the updated guideline in their model (Boyalakuntla et al.
the accessibility status. We also provide information on each [25]), and other models have no such concern. Concerning
type of impairment related to each success criterion. This user and expert requirements and suggestions, none of the
information indicates which group of individuals with spe- compared tools were found to address such concerns in their
cific needs these particular success criteria are important to evaluation process. Besides, for textual and non-textual
ensure that the web content is accessible to them. In the last, component analysis, none of the models were found with
as majority of the developed tools don’t indicate which suc- such concern in their evaluation process. For evaluated and
cess criteria can be implemented automatically and which not evaluated guideline information, one model (Pelzetter
require manual or expert investigation as it is not possible [22]) provides feedback about the evaluated and not evalu-
to incorporate all the success criteria in an automated man- ated checkpoints feedback. Even though other models did
ner. Therefore, we offered textual improvement directions not compute the total accessibility score, two models, (Pel-
that show which criteria the tool successfully validated and zetter [47] and Hilera et al. [1]) were found to be concerned
which criteria need further verification or expert testing. It is about the overall accessibility score into account. Lastly,
recommended to do additional verification for FAILED suc- concentrating on disability types in the computation of the
cess criteria and expert testing for NOT TESTED criteria. accessibility score, none of the models generate an acces-
Regarding the accessibility score for each disability type, sibility score for each type of disability. Out of all these
none of the selected existing tools provide the percentage of factors, the proposed framework takes into account every
accessibility for each disability type, such as visual impair- element addressed in Sect. 3 to enhance the accessibility
ment, cognitive impairment, etc. It may be challenging for assessment or evaluation outcome of a webpage. Therefore,
web practitioners to comprehend which types of disabilities our hypothesis is that the proposed framework, in compari-
were prioritized during the development of that particular son to other comparable models, can offer a comprehensive
website and which disability type needs to be prioritized for and up-to-date view of webpage accessibility.
future improvement. Providing accessibility percentages
for each type of disability can give a better understanding
of how accessible a certain website is for a specific group
of people. Furthermore, an overall accessibility score with

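Once every success criterion has been labelled PASSED, FAILED, NOT DETECTED, or NOT TESTED and linked to the impairment types it concerns, the overall score and the per-disability scores described above can be aggregated from those records. The paper does not publish its exact scoring formula, so the pass-rate weighting below is an assumption used purely for illustration.

```python
from collections import defaultdict

def accessibility_scores(records):
    """records: iterable of (criterion_id, status, impairment_types).
    Returns an overall score and per-impairment scores as the share of
    automatically testable criteria that PASSED; NOT TESTED items are
    excluded from the denominator. The weighting is illustrative only."""
    overall_pass = overall_total = 0
    per_group = defaultdict(lambda: [0, 0])  # impairment -> [passed, total]
    for criterion, status, impairments in records:
        if status == "NOT TESTED":
            continue
        overall_total += 1
        overall_pass += status == "PASSED"
        for group in impairments:
            per_group[group][1] += 1
            per_group[group][0] += status == "PASSED"
    overall = 100 * overall_pass / overall_total if overall_total else 0
    by_group = {g: round(100 * p / t, 1) for g, (p, t) in per_group.items()}
    return round(overall, 1), by_group

sample = [
    ("1.1.1", "PASSED", ["visual"]),
    ("1.4.3", "FAILED", ["visual", "cognitive"]),
    ("2.4.2", "NOT DETECTED", ["cognitive"]),
    ("3.1.5", "NOT TESTED", ["cognitive"]),
]
print(accessibility_scores(sample))
```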
Table 9 Comparative assessment results considering functional properties with existing models
References Assessment Functional Properties
Heading level WCAG User require- Expert sug- Textual/ Evaluated and Overall Accessibil-
2.2 ments gestion non-textual not Evaluated Acces- ity Score
consideration consideration component Checkpoints sibility for Disabil-
analysis feedback Score ity type
Jens Pelzetter (2018) [47] ✗ (No) ✗ (No) ✗ (No) ✗ (No) ✗ (No) ✓(Yes) ✗ (No)
Jens Pelzetter, (2020) [22] ✗ (No) ✗ (No) ✗ (No) ✗ (No) ✓(Yes) ✗ (No) ✗ (No)
Michailidou et al. (2021) [16] ✗ (No) ✗ (No) ✗ (No) ✗ (No) ✗ (No) ✗ (No) ✗ (No)
Boyalakuntla et al. (2021) [25] ✓(Yes) ✗ (No) ✗ (No) ✗ (No) ✗ (No) ✗ (No) ✗ (No)
Hilera et al. (2016) [1] ✗ (No) ✗ (No) ✗ (No) ✗ (No) ✗ (No) ✓(Yes) ✗ (No)
Ingavélez-Guerra et al. (2018) [27] ✗ (No) ✗ (No) ✗ (No) ✗ (No) ✗ (No) ✗ (No) ✓(Yes)
Proposed Model ✓(Yes) ✓(Yes) ✓(Yes) ✓(Yes) ✓(Yes) ✓(Yes) ✓(Yes)

5 Discussion such as accessibility for mobile applications or accessibil-


ity criteria for software only. Thus, finding experts who
Several recent analyses of website accessibility have found have adequate knowledge about the guidelines of the target
that the proportion of inaccessible websites is growing rap- domain (mobile/software/web platform) is challenging [49].
idly [7, 40], which negatively impacts people with disabili- Other issues related to promptly auditing or verifying the
ties in their access to digital resources. In recent years, this evaluated result could increase the cost of the entire evalua-
has drawn the attention of researchers interested in finding tion process [50]. For example, as the current web platform
ways to improve this problem so that persons with impair- is changing dynamically and to keep this dynamic domain
ments can gain better access options. Among several pos- accessible, proper auditing is important and should be con-
sible solutions, the most effective technique to demonstrate ducted routinely. Thus, organizing the hybrid testing itself is
the limitations of a developed website is by automatically time-consuming, and costly. According to the scenario and
reviewing the website to determine its accessibility status our observation, we agree with Keselj et al. [51] in empha-
in terms of several factors like accessibility score, acces- sizing the importance of automated testing for identifying
sibility concerns, etc. Therefore, in this research work, our the accessibility aspects of web platforms. Additionally,
prime aim is to focus on an automated accessibility testing with the help of an automated system, it is feasible to evalu-
framework in order to improve the effectiveness of acces- ate many websites in a short time with a minimal cost which
sibility testing. We aim to consider automated techniques is nearly impossible with hybrid testing.
as automated techniques are more effective than the hybrid However, in order to address the current limitations
approach when it comes to minimizing time and cost. For of automated accessibility testing tools, our findings are
example, in a hybrid approach, user and expert testing have aligned with appropriate guideline selection, user and expert
been incorporated where user testing involves a group of suggestion incorporation, guideline modelling, semantic
users accessing the website and testing accessibility crite- improvement, and proper accessibility result formulation.
ria from their knowledge and experience. Besides, expert All these aspects have been addressed in the proposed
testing involves experts or professionals from accessibility accessibility aspects framework, shown in Fig. 4. To imple-
domain who check manually all the conformance levels and ment this framework, adequate knowledge of programming,
guidelines. However, the user testing process requires some sufficient knowledge about Web Content Accessibil-
pre-evaluation setup which requires additional cost and time ity Guidelines (WCAG), the user and expert suggestions,
[48]. For example, user selection (in terms of several cri- and guideline simplification were required. Besides, to
teria like their ability, cognitive functionality, knowledge implement this framework, we face challenges with the
about accessibility criteria, available time, etc.); selection classification or distribution of the guidelines in terms of
of required assistive technology (if it is applicable), test- semantic aspects and non-semantic aspects. Therefore, we
ing tools or questionnaire preparation, setting up the testing simplified the guidelines that can be found in [52], which
environment (online or offline), prepare the guidelines or presented complete information with simplified informa-
instruction about how to conduct the test or how to evalu- tion of WCAG 2.2 addressing several aspects such as web
ate the website, etc. Also, in expert evaluation, to get the objects, guidelines, attributes, conformance level, require-
proper and acceptable outcome, potential expert selection ments, beneficial type (several disabilities), and evaluation
with adequate knowledge and experience is a difficult task phase (whether can be evaluate automated or require addi-
and sometimes they are not available for conducting the tional checking). Furthermore, implementing the algorithms
testing. Few experts might have domain-specific knowledge for semantic and non-semantic aspects separately required

adequate knowledge of web programming and natural lan- 7 Appendix


guage processing techniques. Also, determining the proper
visualization criteria focusing on every disability type and
ensurement of appropriate assessment terms was challeng- Table 10 Directions/guidelines/rules for each addressed element in the
ing while we incorporated this framework which required proposed framework
additional concern and expertise on accessibility perspec- Objects Related guidelines/Improvement directions
tive in detail. However, the proposed framework has some Images (1.1.1) Non-text Content; (1.4.5) Images of
Text [WCAG 2.2]
limitations, such as the proposed model only considers
Pre-recorded/Live (1.2.1) Audio-only and Video-only; (1.2.3)
WCAG that can be extended incorporating other guidelines. Audio Audio Description or Media Alternative (Prere-
Besides, the proposed model considers fewer criteria from corded); (1.2.4) Captions (Live) [WCAG 2.2]
2 experts and 5 user suggestions, which can be extended Pre-recorded/Live (1.2.1) Audio-only and Video-only; (1.2.2)
by considering other criteria by incorporating more user Video Captions (Prerecorded); (1.2.4) Captions (Live)
[WCAG 2.2]
and expert involvement. By addressing these limitations,
Links (1.3.6) Identify Purpose; (1.4.1) Use of Color;
the effectiveness of the proposed model can be improved (1.4.8) Visual Presentation; (2.4.4) Link Pur-
further. pose (In Context); (2.4.9) Link Purpose (Link
Only), (4.1.2) Name, Role [WCAG 2.2]
Words (1.3.2) Meaningful Sequence; (2.4.1) Bypass
Blocks; (3.1.3) Unusual Words [WCAG 2.2]
6 Conclusion
Sentences (1.3.2) Meaningful Sequence; (2.4.1) Bypass
Blocks; (3.1.3) Unusual Words [WCAG 2.2]
In this study, we looked at earlier research in the area of Paragraph (1.3.2) Meaningful Sequence; (1.4.1) Use of
developing accessibility evaluation tools to assess their Color; (2.4.1) Bypass Blocks; (3.1.3) Unusual
strengths, limitations, and weaknesses. According to the Words [WCAG 2.2]
observation of earlier research and by addressing the iden- Button, Icons, (1.3.3) Sensory Characteristics; (1.3.5) Identify
Fields, Label Input Purpose (placeholder); (1.3.6) Identify
tifying issues, we proposed an accessibility evaluation
Purpose (Buttons); (2.1.1) Keyboard; (2.1.4)
framework considering several aspects from the coding to Character Key Shortcuts; (3.3.2) Labels or
the visualization part which led web practitioners to under- Instructions [WCAG 2.2]
stand the accessibility evaluation process and how it could Text (1.3.2) Meaningful Sequence; (2.4.1) Bypass
facilitate to improvement of the accessibility evaluation Blocks; (3.1.3) Unusual Words [WCAG 2.2]
result. We structured the proposed accessibility framework Title (1.3.2) Meaningful Sequence; (1.4.12) Text
Spacing; (2.4.1) Bypass Blocks; (2.4.2) Page
according to several aspects: guideline selection, user and Titled [WCAG 2.2]
expert suggestion consideration, guideline visualization, Heading and (1.4.1) Use of Color; (2.4.6) Headings and
listing several website features that require special focus Labels Labels; (2.4.10) Section Headings [WCAG 2.2]
during tool development, and acceptable accessibility issue Language (3.1.1) Language of Page [WCAG 2.2]
identification and visualization process; these represent Idioms (3.1.3) Unusual Words [WCAG 2.2]
the effectiveness of the proposed approach to facilitate the Jargons (3.1.3) Unusual Words [WCAG 2.2]
evaluation process and improve its effectiveness. The pro- Pronunciation (3.1.6) Pronunciation [WCAG 2.2]
Reading level (3.1.5) Reading Level [WCAG 2.2]
posed framework has the potential to overcome the limita-
Context-sensitive (3.1.3) Unusual Words [WCAG 2.2]
tions of current accessibility evaluation tools. Additionally, content
none of the studies found in the literature addressed such Drop-down menu, (3.2.1) On Focus; (3.2.2) On Input [WCAG
aspects for web accessibility evaluation, which completely dialog box, check- 2.2]
represent the novelty of the proposed framework. Moreover, box, combo box
the proposed framework is part of a web accessibility test- Search field (2.4.5) Multiple Ways; (3.2.3) Consistent Navi-
gation; (3.2.4) Consistent Identification; (3.3.2)
ing software development project. As an initial effort, we Labels or Instructions [WCAG 2.2]
proposed the accessibility evaluation framework to address Form (3.2.5) Change on Request; (4.1.3) Status Mes-
the potential accessibility criteria that could help to facili- sages [WCAG 2.2]
tate the evaluation process. Our next step or future work Error (3.3.1) Error Identification; (3.3.3) Error Sug-
is aligned with the experimentation of the proposed frame- gestion [WCAG 2.2]
work in practical cases to evaluate and validate the acces- Word length (1.3.3) Sensory Characteristics; (1.3.4) Orienta-
tion [WCAG 2.2]
sibility of web pages.
Sentence length (1.3.3) Sensory Characteristics; (1.3.4) Orienta-
tion [WCAG 2.2]
Paragraph length (1.3.3) Sensory Characteristics; (1.3.4) Orienta-
tion [WCAG 2.2]

Table 10 Directions/guidelines/rules for each addressed element in the nal version. All authors have read and agreed to the published version
proposed framework of the manuscript.
Objects Related guidelines/Improvement directions
Funding The authors declare that this article is their research and was
Text content length: (1.3.3) Sensory Characteristics; (1.3.4) Orientation [WCAG 2.2]
User Information: No ‘Username’ and ‘Password’
CAPTCHA: (1.1.1) Non-text Content [WCAG 2.2]
Font size: (1.4.12) Text Spacing [WCAG 2.2]
Font family: Accessible font families are ‘Tahoma’, ‘Calibri’, ‘Helvetica’, ‘Arial’, ‘Verdana’, or ‘Times New Roman’
Complexity of text pattern: No <b>, <strong>, <i>, <em>, <mark>, <sub>, or <sup> elements/patterns
Website content type: Webpage content should be a combination of paragraph, image, and video content
Field, Button, Link: (1.3.1) Info and Relationships; (1.4.8) Visual Presentation; (2.4.5) Multiple Ways; (3.3.2) Labels or Instructions; (4.1.2) Name, Role, Value [WCAG 2.2]
Image, Logo: (1.3.3) Sensory Characteristics [WCAG 2.2]
Text in Images: (1.3.3) Sensory Characteristics [WCAG 2.2]
Maps, Images, Diagram, Data tables, Presentation, Video: (1.4.8) Visual Presentation; (1.4.10) Reflow [WCAG 2.2]
Input size: (1.3.3) Sensory Characteristics; (1.3.5) Identify Input Purpose [WCAG 2.2]
Markup language elements: (1.3.6) Identify Purpose [WCAG 2.2]
Loading time: Webpage loading time should be <= 0.3 s
Page length: Webpage length should be <= 14 KB
Website availability: Webpage must be available (not down)
Manual text size/font adjustment: Webpage should have a manual text and font size adjustment option
Manual color adjustment: Webpage should have a manual color adjustment option
Number of internal and external links: The number of internal and external hyperlinks must be <= 50
Number of image content: The number of images must be <= 10
Number of video content: The number of audios should be >= 1 and <= 2, and the number of videos should be >= 1 and <= 2
Accessible color pair: Webpage color pairs should be accessible; the [‘red’, ‘green’] and [‘red’, ‘black’] pairs/combinations are excluded
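The quantitative thresholds listed above lend themselves to simple automated checks. The following minimal sketch is given for illustration only and is not the framework's actual implementation: it assumes the Python requests and BeautifulSoup libraries, a hypothetical check_page function, and an example URL, while the thresholds themselves are taken from the table above.

```python
# Illustrative sketch only: verifies a handful of the quantitative criteria
# from the table above for a single page. Function name, example URL, and
# the use of requests/BeautifulSoup are assumptions for demonstration.
import time
import requests
from bs4 import BeautifulSoup

DISCOURAGED_TAGS = ["b", "strong", "i", "em", "mark", "sub", "sup"]

def check_page(url: str) -> dict:
    """Return pass/fail results for a few of the table's criteria."""
    start = time.time()
    response = requests.get(url, timeout=10)
    load_time = time.time() - start
    soup = BeautifulSoup(response.text, "html.parser")

    return {
        # Loading time should be <= 0.3 s; page length should be <= 14 KB.
        "loading_time_ok": load_time <= 0.3,
        "page_length_ok": len(response.content) <= 14 * 1024,
        # Internal and external hyperlinks must be <= 50.
        "link_count_ok": len(soup.find_all("a", href=True)) <= 50,
        # Number of images must be <= 10.
        "image_count_ok": len(soup.find_all("img")) <= 10,
        # (1.1.1) Non-text Content: every image should carry an alt attribute.
        "images_have_alt": all(img.has_attr("alt") for img in soup.find_all("img")),
        # Complexity of text pattern: none of the discouraged styling elements.
        "no_discouraged_tags": all(soup.find(tag) is None for tag in DISCOURAGED_TAGS),
    }

if __name__ == "__main__":
    print(check_page("https://example.com"))  # hypothetical target page
```

Criteria that map onto WCAG 2.2 success criteria such as (1.3.1) Info and Relationships or (4.1.2) Name, Role, Value generally require richer DOM and accessibility-tree analysis than such simple counts can provide.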
Acknowledgements The authors would like to thank all the authors for their contributions.

Author contributions Jinat Ara and Cecilia Sik-Lanyi conceptualized the study. Jinat Ara undertook the investigation, conducted the analyses of past literature, managed the data, developed the proposed framework with support from Cecilia Sik-Lanyi, and wrote the first draft version of the paper. Cecilia Sik-Lanyi and Arpad Kelemen reviewed the first draft version of the paper and provided valuable direction and suggestions, following which Jinat Ara prepared the final version of the paper. Finally, Arpad Kelemen and Tibor Guzsvinecz reviewed the final version of the paper. All authors provided critical input, reviewed the manuscript, and agreed on the final version of the paper.

Funding This research was not financially supported. Open access funding provided by University of Pannonia.

Data availability No datasets were generated or analysed during the current study.

Declarations

Competing interests The authors declare no competing interests.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
