Search Results (890)

Search Parameters:
Keywords = crowd-sourcing

18 pages, 827 KiB  
Article
Zero-Shot Learning for Accurate Project Duration Prediction in Crowdsourcing Software Development
by Tahir Rashid, Inam Illahi, Qasim Umer, Muhammad Arfan Jaffar, Waheed Yousuf Ramay and Hanadi Hakami
Computers 2024, 13(10), 266; https://doi.org/10.3390/computers13100266 - 12 Oct 2024
Viewed by 213
Abstract
Crowdsourcing Software Development (CSD) platforms such as TopCoder function as intermediaries connecting clients with developers. Despite employing systematic methodologies, these platforms frequently encounter high task abandonment rates, with approximately 19% of projects failing to meet satisfactory outcomes. Although existing research has focused on task scheduling, developer recommendations, and reward mechanisms, there has been insufficient attention to the support of platform moderators, or copilots, who are essential to project success. A critical responsibility of copilots is estimating project duration; however, manual predictions often lead to inconsistencies and delays. This paper introduces an innovative machine learning approach designed to automate the prediction of project duration on CSD platforms. Utilizing historical data from TopCoder, the proposed method extracts pertinent project attributes and preprocesses textual data through Natural Language Processing (NLP). Bidirectional Encoder Representations from Transformers (BERT) are employed to convert textual information into vectors, which are then analyzed using various machine learning algorithms. Zero-shot learning algorithms exhibit superior performance, with an average accuracy of 92.76%, precision of 92.76%, recall of 99.33%, and an f-measure of 95.93%. The implementation of the proposed automated duration prediction model is crucial for enhancing the success rate of crowdsourcing projects, optimizing resource allocation, managing budgets effectively, and improving stakeholder satisfaction.
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
Figures: (1) Success and failure distribution of CCSD projects; (2) Proposed model; (3) Spider graph: performance comparison against alternative approaches; (4) Ridge graph: comparison against different machine learning algorithms.
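The pipeline described above (BERT text vectors fed to conventional classifiers) can be sketched in a few lines of Python. The snippet below is a minimal illustration, assuming Hugging Face transformers and scikit-learn; the task texts, duration buckets, and the plain logistic-regression classifier (standing in for the paper's zero-shot learner) are illustrative assumptions, not the authors' configuration.

```python
# Sketch: embed task descriptions with BERT, then train a simple classifier
# to bucket expected project duration. Field names, duration buckets, and
# the classifier choice are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    """Return mean-pooled BERT embeddings for a list of task descriptions."""
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state            # (batch, seq, 768)
    mask = enc["attention_mask"].unsqueeze(-1)            # ignore padding tokens
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()

# Toy historical tasks labeled with duration buckets.
tasks = ["Fix login API bug", "Build full analytics dashboard", "Update docs"]
labels = ["short", "long", "short"]

clf = LogisticRegression(max_iter=1000).fit(embed(tasks), labels)
print(clf.predict(embed(["Implement payment gateway integration"])))
```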
19 pages, 1581 KiB  
Review
Integrated People and Freight Transportation: A Literature Review
by Onur Derse and Tom Van Woensel
Future Transp. 2024, 4(4), 1142-1160; https://doi.org/10.3390/futuretransp4040055 - 8 Oct 2024
Viewed by 462
Abstract
Increasing environmental and economic pressures have led to numerous innovations in the logistics sector, including integrated people and freight transport (IPFT). Despite growing attention from practitioners and researchers, IPFT lacks extensive research coverage. This study aims to bridge this gap by presenting a general framework and making several key contributions. It identifies, researches, and explains relevant terminologies, such as cargo hitching, freight on transit (FoT), urban co-modality, crowd-shipping (CS), occasional drivers (OD), crowdsourced delivery among friends, and share-a-ride, illustrating the interaction of IPFT with different systems like the sharing economy and co-modality. Furthermore, it classifies IPFT-related studies at strategic, tactical, and operational decision levels, detailing those that address uncertainty. The study also analyzes the opportunities and challenges associated with IPFT, highlighting social, economic, and environmental benefits and examining challenges from a PESTEL (political, economic, social, technological, environmental, and legal) perspective. Additionally, it discusses practical applications of IPFT and offers recommendations for future research and development, aiming to guide practitioners and researchers in addressing existing challenges and leveraging opportunities. This comprehensive framework aims to significantly advance the understanding and implementation of IPFT in the logistics sector.
Figures: (1) General structure of the IPFT; (2) Terminology structure for IPFT; (3) The general framework.
18 pages, 3143 KiB  
Article
Estimating Rainfall Intensity Using an Image-Based Convolutional Neural Network Inversion Technique for Potential Crowdsourcing Applications in Urban Areas
by Youssef Shalaby, Mohammed I. I. Alkhatib, Amin Talei, Tak Kwin Chang, Ming Fai Chow and Valentijn R. N. Pauwels
Big Data Cogn. Comput. 2024, 8(10), 126; https://doi.org/10.3390/bdcc8100126 - 29 Sep 2024
Viewed by 494
Abstract
High-quality rainfall data are essential in many water management problems, including stormwater management, water resources management, and more. Due to the high spatial–temporal variations, rainfall measurement could be challenging and costly, especially in urban areas. This could be even more challenging in tropical regions with their typical short-duration and high-intensity rainfall events, as some of the undeveloped or developing countries in those regions lack a dense rain gauge network and have limited resources to use radar and satellite readings. Thus, exploring alternative rainfall estimation methods could help compensate for these shortcomings. Recently, a few studies have examined the utilisation of citizen science methods to collect rainfall data as a complement to the existing rain gauge networks. However, these attempts are in the early stages, and limited work has been published on improving the quality of such data. Therefore, this study focuses on image-based rainfall estimation with potential usage in citizen science. For this, a novel convolutional neural network (CNN) model is developed to predict rainfall intensity by processing the images captured by citizens (e.g., by smartphones or security cameras) in an urban area. The developed model is merely a complementary sensing tool (e.g., offering better spatial coverage) for the existing rain gauge network in an urban area and is not meant to replace it. This study also presents one of the most extensive datasets of rain image data ever published in the literature. Rainfall estimates from the proposed CNN model, based on images captured by surveillance cameras and smartphone cameras, are compared with rainfall observed by a weather station and exhibit strong R² values of 0.955 and 0.840, respectively.
Figures: (1) Locations of rainfall images captured by surveillance cameras and smartphones during rain events near the Monash University campus in Malaysia; (2) Sample images captured by smartphone on campus during rainfall events; (3) Schematic representation of the CNN architecture; (4) The regression CNN workflow; (5) Schematic structure of the CNN model covering image import, thresholding, partitioning, training, and testing for rainfall intensity prediction; (6) Rainfall intensity distribution for (a) surveillance camera and (b) smartphone camera images; (7) Pre-processing stages: raw rain image, sharpened image, greyscale pixel intensity, Otsu's thresholding, and the merged result; (8) Samples of images pre-processed with Otsu's method under no/low, moderate, and heavy rain; (9) Observed vs. predicted rainfall intensity by CNN Model 4 using surveillance-camera images; (10) Observed vs. simulated rainfall intensities by CNN Model 4 using Approach 2 on the smartphone testing dataset.
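Since the abstract describes a CNN that maps a rain image directly to a rainfall intensity value, a compact regression-CNN sketch in PyTorch is shown below. The layer sizes, the 128x128 greyscale input, and the MSE loss are illustrative assumptions and do not reproduce the paper's tuned "Model 4".

```python
# Sketch: a small CNN that regresses rainfall intensity (mm/h) from a single
# greyscale rain image. Architecture and hyperparameters are assumptions.
import torch
import torch.nn as nn

class RainCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)          # single intensity output

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(-1)

model = RainCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One toy training step on random "images" and intensities.
images = torch.rand(8, 1, 128, 128)
intensity = torch.rand(8) * 50.0              # mm/h
loss = loss_fn(model(images), intensity)
loss.backward()
optimizer.step()
print(float(loss))
```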
24 pages, 32566 KiB  
Article
A Study on the Influencing Factors of the Vitality of Street Corner Spaces in Historic Districts: The Case of Shanghai Bund Historic District
by Zehua Wen, Jiantong Zhao and Mingze Li
Buildings 2024, 14(9), 2947; https://doi.org/10.3390/buildings14092947 - 18 Sep 2024
Viewed by 473
Abstract
The revitalization of historic districts is crucial for the sustainable development of cities, with street corner spaces being a vital component of the public space in these districts. However, street corner spaces have been largely overlooked in previous research on crowd dynamics within historic districts. This study investigates the key factors influencing crowd dynamics in street corner spaces within historic districts. First, a hierarchical model of vitality-influencing factors was developed based on prior research. Potential factors influencing the vitality of street corners were quantified using multi-source data collection methods, including deep learning algorithms, and crowd vitality within these spaces was assessed through multidimensional measurements. The impact of each element on crowd vitality was then analyzed through a multivariate linear regression model. The findings revealed that eight factors (corner building historicity, first-floor functional communality, transparency, openness, density of functional facilities, greenness, functional variety of buildings, and walkability) significantly influence the vitality of corner spaces, collectively explaining 77.5% of the vitality of these spaces. These conclusions offer new perspectives and scientific evidence for the revitalization and conservation of historic districts.
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
Figures: (1) Schematic of the street corner study area (an example within the Bund district of Shanghai); (2) Hierarchical model of factors influencing the vitality of street corners in historic districts; (3) Research framework; (4) Location map of Shanghai Bund Historic District; (5) Map of the Bund in Shanghai in 1885 with a schematic representation of the study area (source: "A Short History of Shanghai" by H. Pott, redrawn by the author); (6) Historic building grading and conservation plan for the Bund Historic District (source: Xu, Li-Xun [44], redrawn by the author); (7) Schematic diagram of the selection of the study sample within the study area, with identification numbers for each street corner sample; (8) Deep learning street corner image processing framework; (9) Results of deep learning processing of sample street corner images: human-view sample images, crowd detection with YOLOv8, and semantic segmentation with SegFormer; (10) Historical imagery indicator values for corner buildings; (11) Corner building functional indicator values; (12) Spatial form indicator values for street corners; (13) Street corner spatial functional indicator values; (14) Street corner activity population age composition; (15) Street corner activity types of the crowd; (16) Vitality index for each sample street corner; (17) Histogram of residuals with fitted normal curve; (18) Scatterplot of residuals; (19) Model regression parameter diagram.
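The core statistical step reported above is a multivariate linear regression of a vitality index on quantified corner attributes. A minimal sketch with statsmodels is given below; the data are synthetic and only the column names echo the eight significant factors from the abstract.

```python
# Sketch: ordinary least squares of a corner-vitality index on the eight
# factors named in the abstract. Data are synthetic; only the column names
# mirror the study's factors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
factors = ["historicity", "communality", "transparency", "openness",
           "facility_density", "greenness", "functional_variety", "walkability"]
X = pd.DataFrame(rng.random((60, len(factors))), columns=factors)
vitality = X.sum(axis=1) + rng.normal(scale=0.5, size=60)   # synthetic target

model = sm.OLS(vitality, sm.add_constant(X)).fit()
print(model.rsquared)   # share of variance explained (cf. the reported 77.5%)
print(model.params)     # per-factor coefficients
```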
22 pages, 4783 KiB  
Article
Research on Express Crowdsourcing Task Allocation Considering Distribution Mode under Customer Classification
by Xiaohu Xing, Chang Sun and Xinqiang Chen
Sustainability 2024, 16(18), 7936; https://doi.org/10.3390/su16187936 - 11 Sep 2024
Viewed by 413
Abstract
In order to promote the sustainable development of crowdsourcing logistics and control its cost while improving the quality of crowdsourcing services, this paper proposes a courier crowdsourcing task allocation model that considers delivery methods under customer classification, with the optimization objective of minimizing the total cost of the crowdsourcing platform. The model adopts two delivery modes: home delivery by crowdsourced couriers and pickup by customers. Customers can freely choose the delivery method according to their actual situation when placing orders, thus better meeting their needs. Based on customers' historical express-consumption data, an entropy weight RFM model is used to classify them, and different penalty functions are constructed for different categories of customers to reduce the total delivery cost and improve on-time delivery for efficient and potential customers. A Customer Classification Genetic Algorithm (CCGA) was designed for simulation experiments, which showed that the proposed algorithm significantly improves local search ability and thereby optimizes the delivery task paths of express crowdsourcing. This improves delivery timeliness for efficient and potential customers and also effectively reduces the total delivery cost. The research on parcel crowdsourcing task allocation based on customer classification therefore reduces the cost of crowdsourcing delivery platforms and improves customer satisfaction, which has both theoretical research value and practical application significance.
(This article belongs to the Section Sustainable Transportation)
Figures: (1) The crowdsourcing task allocation result diagram; (2) Elbow method; (3) Two-dimensional chart of express delivery receipt recency and receipt frequency; (4) Two-dimensional chart of express delivery monetary fee and receipt recency; (5) Two-dimensional chart of express delivery monetary fee and receipt frequency; (6) CCGA flow diagram; (7) Genetic algorithm crossover diagram; (8) Genetic algorithm mutation diagram; (9) Optimal distribution route diagram; (10) CCGA population evolution trend; (11) CCGA and GA cost comparison; (12) CCGA and GA population evolution trend comparison; (13) Comparison of iterative data of CCGA and GA.
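The customer-classification step above relies on an entropy-weight RFM model. The sketch below shows one common way to compute entropy weights over recency, frequency, and monetary value and then split customers into classes with k-means (a choice suggested by the elbow-method figure but still an assumption here); the toy data and cluster count are illustrative.

```python
# Sketch: entropy-weight RFM scoring followed by k-means classification.
# Recency is inverted so that more recent customers score higher; the toy
# data and the number of clusters are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

# Columns: recency (days since last order), frequency, monetary value.
rfm = np.array([[5.0, 40.0, 900.0], [60.0, 3.0, 80.0],
                [12.0, 25.0, 500.0], [90.0, 1.0, 30.0]])
rfm[:, 0] = rfm[:, 0].max() - rfm[:, 0]            # invert recency (lower = better)

# Min-max normalise, then derive one entropy weight per indicator.
norm = (rfm - rfm.min(0)) / (rfm.max(0) - rfm.min(0) + 1e-12)
p = (norm + 1e-12) / (norm + 1e-12).sum(0)
entropy = -(p * np.log(p)).sum(0) / np.log(len(rfm))
weights = (1 - entropy) / (1 - entropy).sum()

scores = norm @ weights                             # composite RFM score
classes = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    scores.reshape(-1, 1))
print(weights, scores, classes)
```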
22 pages, 10817 KiB  
Article
Leveraging Crowdsourcing for Mapping Mobility Restrictions in Data-Limited Regions
by Hala Aburas, Isam Shahrour and Marwan Sadek
Smart Cities 2024, 7(5), 2572-2593; https://doi.org/10.3390/smartcities7050100 - 7 Sep 2024
Viewed by 628
Abstract
This paper introduces a novel methodology for the real-time mapping of mobility restrictions, utilizing spatial crowdsourcing and Telegram as a traffic event data source. This approach is particularly useful in regions with limited traditional data-capturing infrastructure. The methodology employs ArcGIS Online (AGOL) for data collection, storage, and analysis, and develops a 3W (what, where, when) model for analyzing mined Arabic text from Telegram. Data quality validation methods, including spatial clustering, cross-referencing, and ground-truth methods, support the reliability of this approach. Applied to the Palestinian territory, the proposed methodology ensures the accurate, timely, and comprehensive mapping of traffic events, including checkpoints, road gates, settler violence, and traffic congestion. The validation results indicate that using spatial crowdsourcing to report restrictions yields promising validation rates ranging from 67% to 100%. Additionally, the developed methodology utilizing Telegram achieves a precision value of 73%. These results demonstrate that this methodology constitutes a promising solution, enhancing traffic management and informed decision-making, and providing a scalable model for regions with limited traditional data collection infrastructure.
(This article belongs to the Section Applied Science and Humanities for Smart Cities)
Figures: (1) Geographical distribution of mobility restrictions in the WB; (2) Methods for collecting mobility restrictions data; (3) Workflow of importing and integrating data from Survey123 into the ArcGIS Online platform; (4) Methodology of connecting to Telegram, retrieving data, and storing it in a Pandas DataFrame; (5) Methodology of processing and analyzing Telegram data; (6) Phases of Telegram Arabic text processing using the NLTK; (7) Methodology of analyzing text using the 3W model; (8) Methodology of mapping mobility restrictions; (9) Data quality validation methods; (10) Application of mapping mobility restrictions using Survey123: (a) checkpoints and traffic congestion events on the map, (b) the Survey123 checkpoint reporting page with mandatory fields marked with an asterisk, (c) detailed information on a reported checkpoint; (11) Distribution of restriction reports; (12) Results of applying HDBSCAN to the traffic congestion reports, showing the distribution of stability values and two clusters plus one noise cluster; (13) Ground-truth method application: buffer zones around temporary and fixed restrictions, with validation results for checkpoint and road gate reports; (14) Results of the cross-referencing method on a test dataset from a Telegram road-traffic group, alongside the outcomes of the 3W model analysis; (15) Distribution of geocoded locations and ground data; (16) Traffic congestion report in Awarta.
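One of the validation steps above is spatial clustering of crowdsourced reports (HDBSCAN, per the figure list). A minimal sketch of that idea with scikit-learn's HDBSCAN (available from scikit-learn 1.3) is shown below; the coordinates, the use of plain Euclidean distance over a small area, and the min_cluster_size value are illustrative assumptions.

```python
# Sketch: cluster crowdsourced traffic-congestion reports and treat
# unclustered points as noise, mirroring the clustering-based validation
# described above. Coordinates and parameters are illustrative assumptions.
import numpy as np
from sklearn.cluster import HDBSCAN   # scikit-learn >= 1.3

# (longitude, latitude) of reported events; the area is small enough that
# Euclidean distance is an acceptable approximation for this sketch.
reports = np.array([
    [35.264, 32.221], [35.265, 32.222], [35.266, 32.221],   # likely cluster A
    [35.310, 32.180], [35.311, 32.181],                      # likely cluster B
    [35.400, 32.300],                                         # likely noise
])

labels = HDBSCAN(min_cluster_size=2).fit_predict(reports)
print(labels)   # -1 marks noise; other labels identify report clusters
```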
119 KiB  
Abstract
Towards a Crowdsourced Digital Coffee Atlas for Sustainable Coffee Farming
by Emma Krischkowsky, Onur Bal, Colin Beyer, David Miller, Manuel Walter and Kirstin Kohler
Proceedings 2024, 109(1), 5; https://doi.org/10.3390/ICC2024-18176 - 5 Sep 2024
Viewed by 85
Abstract
The present work summarizes the results of a 15-week student project addressing the field of sustainable coffee farming. Coffee farmers often lack scientific knowledge concerning the coffee varieties they cultivate and, having grown coffee for generations, often do not know the names under which their varieties are traded on the global market. This leads to significant disadvantages in market positioning. Consequently, farmers often receive lower prices for their coffee as they cannot accurately determine its true market value. In addition, the effects of climate change force farmers to reconsider the varieties they cultivate, as these no longer deliver stable yields under the changed climate. If farmers are unaware of the potential quality advantages of different coffee types, they cannot optimize growing conditions specific to their climate. As part of a design thinking-based project course, a team of four design and computer science students at Hochschule Mannheim searched for a solution to overcome these disadvantages for local coffee farmers with the support of digital technology. Coffee Consulate helped the team by connecting them to farmers around the world and sharing its domain knowledge. The student team's main idea is to bridge the aforementioned knowledge gap by collecting globally distributed data about coffee species in one worldwide accessible digital system, allowing farmers to be globally connected. Their concept proposes a digital Coffee Atlas for mobile phones, showing where on the planet and under which climate conditions coffee varieties are grown and how these species are named on the global market. The app identifies coffee plants based on pictures uploaded from farmers' phones. The team developed an implementation roadmap that considered how to subsequently extend the database behind the Coffee Atlas and how to accelerate the crowdsourcing process. AI-based image recognition trained with pictures taken from a living collection of coffee cultivars, like that in the botanical garden of Wilhelma (Stuttgart, Germany), and DNA sequences could serve as an initial step for creating the database. Farmers should be motivated to upload pictures of their plants by additional services provided by the app, so that information about coffee species can be crowdsourced with the help of farmers around the world. Such services could include the recognition of plant health conditions, as well as the estimation of the actual market price of a species based on the identification of coffee varieties, or the recommendation of species that are better adapted to the actual or expected climate. In its final implementation, the Coffee Atlas will enhance agricultural practices and economic outcomes for farmers and provide a valuable source of data for researchers around the world.
21 pages, 981 KiB  
Article
A Crowdsourced AI Framework for Atrial Fibrillation Detection in Apple Watch and Kardia Mobile ECGs
by Ali Bahrami Rad, Miguel Kirsch, Qiao Li, Joel Xue, Reza Sameni, Dave Albert and Gari D. Clifford
Sensors 2024, 24(17), 5708; https://doi.org/10.3390/s24175708 - 2 Sep 2024
Viewed by 705
Abstract
Background: Atrial fibrillation (AFib) detection via mobile ECG devices is promising, but algorithms often struggle to generalize across diverse datasets and platforms, limiting their real-world applicability. Objective: This study aims to develop a robust, generalizable AFib detection approach for mobile ECG devices using crowdsourced algorithms. Methods: We developed a voting algorithm using random forest, integrating six open-source AFib detection algorithms from the PhysioNet Challenge. The algorithm was trained on an AliveCor dataset and tested on two disjoint AliveCor datasets and one Apple Watch dataset. Results: The voting algorithm outperformed the base algorithms across all metrics, with averages across all datasets of 0.884 sensitivity, 0.988 specificity, 0.917 PPV, 0.985 NPV, and 0.943 F1-score. It also demonstrated the least variability among datasets, signifying the highest robustness and effectiveness in diverse data environments. Moreover, it surpassed Apple's algorithm on all metrics and showed higher specificity but lower sensitivity than AliveCor's Kardia algorithm. Conclusions: This study demonstrates the potential of crowdsourced, multi-algorithmic strategies in enhancing AFib detection. Our approach shows robust cross-platform performance, addressing key generalization challenges in AI-enabled cardiac monitoring and underlining the potential for collaborative algorithms in wearable monitoring devices.
Figures: (1) Schematic of the proposed voting algorithm for AFib detection: Lead I ECG data are processed independently by six open-source algorithms (Datta et al. [40,41], Gliner et al. [42,43], Kropf et al. [44,45], Baydoun et al., Zabihi et al. [46], and Soliński et al. [47]), whose class labels feed a random forest that makes the final AFib/non-AFib decision; (2) ROC and precision–recall comparison of the voting algorithm against the six base algorithms and the proprietary Apple/Kardia algorithms on the Apple Watch DS, AliveCor DS2, and AliveCor DS3 datasets.
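The voting layer described above takes the labels emitted by several base detectors and lets a random forest make the final AFib/non-AFib call. The sketch below reproduces that structure with synthetic base-detector outputs; it does not include the six PhysioNet Challenge algorithms themselves.

```python
# Sketch: a random-forest "voting" classifier whose input features are the
# binary AFib/non-AFib labels of several base detectors. The base-detector
# outputs here are synthetic stand-ins, not the PhysioNet algorithms.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_records, n_base = 200, 6
truth = rng.integers(0, 2, n_records)                      # 1 = AFib
# Each synthetic base detector agrees with the truth ~85% of the time.
base_labels = np.column_stack(
    [np.where(rng.random(n_records) < 0.85, truth, 1 - truth)
     for _ in range(n_base)])

voter = RandomForestClassifier(n_estimators=100, random_state=0)
voter.fit(base_labels[:150], truth[:150])                  # train split
print(voter.score(base_labels[150:], truth[150:]))         # held-out accuracy
```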
27 pages, 1548 KiB  
Article
Sustainable Development of Digital Cultural Heritage: A Hybrid Analysis of Crowdsourcing Projects Using fsQCA and System Dynamics
by Yang Zhang and Changqi Dong
Sustainability 2024, 16(17), 7577; https://doi.org/10.3390/su16177577 - 1 Sep 2024
Viewed by 1688
Abstract
Cultural heritage crowdsourcing has emerged as a promising approach to address the challenges of digitizing and preserving cultural heritage, contributing to the sustainable development goals of cultural preservation and digital inclusivity. However, the long-term sustainability of these projects faces numerous obstacles. This study explores the key configurational determinants and dynamic evolutionary mechanisms driving the sustainable development of cultural heritage crowdsourcing projects, aiming to enhance their longevity and impact. An innovative integration of fuzzy-set qualitative comparative analysis (fsQCA) and system dynamics (SD) is employed, drawing upon a "resource coordination–stakeholder interaction–value co-creation" analytical framework. Through a multi-case comparison of 18 cultural heritage crowdsourcing projects, we identify necessary conditions for project sustainability, including platform support, data resources, knowledge capital, and digitalization performance. The study reveals multiple sufficient pathways to sustainability through configurational combinations of participant motivation, innovation drive, social capital, and social impact. Our system dynamics analysis demonstrates that crowdsourcing project sustainability exhibits significant nonlinear dynamic characteristics, influenced by the interaction and emergent effects of the resource–participation–performance chain. This research offers both theoretical insights and practical guidance for optimizing crowdsourcing mechanisms and sustainable project operations, contributing to the broader goals of sustainable cultural heritage preservation and digital humanities development. The findings provide a roadmap for policymakers and project managers to design and implement more sustainable and impactful cultural heritage crowdsourcing initiatives, aligning with global sustainability objectives in the digital age.
(This article belongs to the Special Issue Cultural Heritage Conservation and Sustainable Development)
Figures: (1) "Resource Synergy–Subject Interaction–Value Co-creation" analytical framework (source: authors' original creation based on literature synthesis); (2) Simulation results of the system dynamics of digital humanities cultural heritage crowdsourcing projects.
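The system dynamics component above models a resource–participation–performance chain with feedback. Purely as an illustration of what such a stock-flow simulation looks like, the toy Euler-integrated loop below couples three stocks with saturating feedback; the rates and functional forms are invented for the sketch and are not the authors' calibrated model.

```python
# Sketch: a toy stock-flow loop in the spirit of a resource-participation-
# performance chain, integrated with Euler steps. All rates and the
# saturation term are illustrative assumptions.
def simulate(steps=200, dt=0.1):
    resources, participation, performance = 1.0, 0.5, 0.0
    history = []
    for _ in range(steps):
        d_participation = 0.4 * resources * (1 - participation / 5.0)
        d_performance = 0.3 * participation - 0.1 * performance
        d_resources = 0.2 * performance - 0.05 * resources
        participation += dt * d_participation
        performance += dt * d_performance
        resources += dt * d_resources
        history.append((resources, participation, performance))
    return history

print(simulate()[-1])   # end state of the three stocks
```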
26 pages, 13280 KiB  
Article
Impact of Privacy Filters and Fleet Changes on Connected Vehicle Trajectory Datasets for Intersection and Freeway Use Cases
by Enrique D. Saldivar-Carranza, Rahul Suryakant Sakhare, Jairaj Desai, Jijo K. Mathew, Ashmitha Jaysi Sivakumar, Justin Mukai and Darcy M. Bullock
Smart Cities 2024, 7(5), 2366-2391; https://doi.org/10.3390/smartcities7050093 - 30 Aug 2024
Viewed by 851
Abstract
Commercially available crowdsourced connected vehicle (CV) trajectory data have recently been used to provide stakeholders with actionable and scalable roadway mobility infrastructure performance measures. Transportation agencies and automotive original equipment manufacturers (OEMs) share a common vision of ensuring the privacy of motorists that anonymously provide their journey information. As this market has evolved, the fleet mix has changed, and some OEMs have introduced additional fuzzification of CV data around 0.5 miles of frequently visited locations. This study compared the estimated Indiana market penetration rates (MPRs) between historic non-fuzzified CV datasets from 2020 to 2023 and a 5–11 May 2024 CV dataset with fuzzified records and a reduced fleet. At selected permanent interstate and non-interstate count stations, overall CV MPRs decreased by 0.5% and 0.3% compared to 2023, respectively, whereas the trend in previous years had been upward. Additionally, this paper evaluated the impact on data characteristics at freeways and intersections between the fuzzified 5–11 May 2024 CV dataset and a non-fuzzified 7–13 May 2023 CV dataset. The analysis found that the total number of GPS samples decreased by 10% statewide. Of the 54,284 evaluated 0.1-mile Indiana freeway, US Route, and State Route segments, the number of CV samples increased for 33.8% and decreased for 65.9%. This study also evaluated 26,291 movements at 3289 intersections and found that the number of available trajectories increased for 28.3% and decreased for 70.4%. This paper concludes that data representativeness remains sufficient to derive most relevant mobility performance measures. However, since the change in available trajectories is not uniformly distributed among intersection movements, an unintended sample bias may be introduced when computing performance measures, which may affect signal retiming or capital investment opportunity identification algorithms.
Figures: (1) Non-fuzzified waypoints available for analysis and fuzzified waypoints sampled during a 10-min period; (2) Locations of twenty-eight count stations in Indiana; (3) CV penetration in 2024 across ten interstate count stations in Indiana; (4) CV penetration in 2024 across eighteen non-interstate count stations in Indiana; (5) CV penetration comparison over the years across 12 common count stations; (6) Map of Indiana showing the routes analyzed: 12 interstates, 4 US Routes, and 3 State Routes; (7) Box-and-whisker diagrams of the network-level change in the number of journeys by route type and 0.1-mile segment; (8) CFD of the network-level change in the number of journeys by route type and 0.1-mile segment; (9) Pareto-sorted 0.1-mile route segments ranked by their change in journeys; (10) Change in the number of journeys at the 0.1-mile route segment level in Indiana; (11) Box-and-whisker diagrams of the network-level change in the number of analyzed vehicle trajectories by intersection and movement; (12) CFD of the network-level change in the number of analyzed vehicle trajectories by intersection and movement; (13) Pareto-sorted movements ranked by their change in analyzed vehicle trajectories; (14) Change in the number of vehicle trajectories analyzed at the intersection level in Indiana; (15) Speed analysis over one mile of I-465 around Indianapolis, Indiana (IL = inner loop, OL = outer loop); (16) LOS estimation for the minor movements at a 12-intersection corridor.
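The segment-level comparison above (the share of 0.1-mile segments whose CV journeys increased or decreased between the two study weeks) is straightforward to reproduce once journeys are counted per segment. A small pandas sketch is given below; the segment IDs and counts are made up for illustration.

```python
# Sketch: compare journeys per 0.1-mile segment between two one-week CV
# datasets and report the share of segments that gained or lost samples.
# Segment IDs and counts are illustrative assumptions.
import pandas as pd

j2023 = pd.DataFrame({"segment": ["I65_001", "I65_002", "SR37_010", "US31_005"],
                      "journeys": [1200, 800, 150, 60]})
j2024 = pd.DataFrame({"segment": ["I65_001", "I65_002", "SR37_010", "US31_005"],
                      "journeys": [1350, 700, 140, 75]})

merged = j2023.merge(j2024, on="segment", suffixes=("_2023", "_2024"))
merged["pct_change"] = ((merged.journeys_2024 - merged.journeys_2023)
                        / merged.journeys_2023 * 100)

print(merged)
print("increased:", (merged.pct_change > 0).mean() * 100, "% of segments")
print("decreased:", (merged.pct_change < 0).mean() * 100, "% of segments")
```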
22 pages, 3303 KiB  
Review
Toward Greener Supply Chains by Decarbonizing City Logistics: A Systematic Literature Review and Research Pathways
by Doğukan Toktaş, M. Ali Ülkü and Muhammad Ahsanul Habib
Sustainability 2024, 16(17), 7516; https://doi.org/10.3390/su16177516 - 30 Aug 2024
Viewed by 1132
Abstract
The impacts of climate change (CC) are intensifying and becoming more widespread. Greenhouse gas emissions (GHGs) significantly contribute to CC and are primarily generated by transportation, a dominant segment of supply chains. City logistics is responsible for a significant portion of GHGs, as conventional vehicles are the primary mode of transportation in logistical operations. Nonetheless, city logistics is vital for urban areas' economy and quality of life. Therefore, decarbonizing city logistics (DCL) is crucial to promote green cities and sustainable urban living and mitigate the impacts of CC. However, sustainability encompasses the environment, economy, society, and culture, collectively called the quadruple bottom line (QBL) pillars of sustainability. This research uses the QBL approach to review the extant literature on DCL. We searched for articles on SCOPUS, focusing on analytical scholarly studies published in the past two decades. By analyzing publication years, journals, countries, and keyword occurrences, we present an overview of the current state of DCL research. Additionally, we examine the methods and proposals outlined in the reviewed articles, along with the QBL aspects they address. Finally, we discuss the evolution of DCL research and provide directions for future research. The results indicate that optimization is the predominant solution approach among the analytical papers in the DCL literature. Our analysis reveals a lack of consideration for the cultural aspect of QBL, which is essential for the applicability of any proposed solution. We also note the integration of innovative solutions, such as crowdsourcing, electric and hydrogen vehicles, and drones in city logistics, indicating a promising research area that can contribute to developing sustainable cities and mitigating CC.
(This article belongs to the Special Issue Green Maritime Logistics and Sustainable Port Development)
Figures: (1) Schema framing city logistics operations; (2) Quadruple bottom line (QBL) pillars of sustainability; (3) Keyword trends based on the search in All Fields (2004–2023); (4) Keyword trends based on the search in Title, Abstract, and Keywords (2004–2023); (5) Number of selected articles published in each year; (6) Scholarly journals that published the most on DCL; (7) Countries that published the most on DCL; (8) Keyword co-occurrence network via VOSviewer (version 1.6.20).
29 pages, 2443 KiB  
Article
User Mobility Modeling in Crowdsourcing Application to Prevent Inference Attacks
by Farid Yessoufou, Salma Sassi, Elie Chicha, Richard Chbeir and Jules Degila
Future Internet 2024, 16(9), 311; https://doi.org/10.3390/fi16090311 - 28 Aug 2024
Viewed by 2538
Abstract
With the rise of the Internet of Things (IoT), mobile crowdsourcing has become a leading application, leveraging the ubiquitous presence of smartphone users to collect and process data. Spatial crowdsourcing, which assigns tasks based on users' geographic locations, has proven to be particularly innovative. However, this trend raises significant privacy concerns, particularly regarding the precise geographic data required by these crowdsourcing platforms. Traditional methods, such as dummy locations, spatial cloaking, differential privacy, k-anonymity, and encryption, often fail to mitigate the risks associated with the continuous disclosure of location data. An unauthorized entity could access these data and infer personal information about individuals, such as their home address, workplace, religion, or political affiliations, thus constituting a privacy violation. In this paper, we propose a user mobility model designed to enhance location privacy protection by accurately identifying Points of Interest (POIs) and countering inference attacks. Our main contribution here focuses on user mobility modeling and the introduction of an advanced algorithm for precise POI identification. We evaluate our contributions using GPS data collected from 10 volunteers over a period of 3 months. The results show that our mobility model delivers significant performance and that our POI extraction algorithm outperforms existing approaches.
Figures: (1) Motivating scenario; (2) Flow of the proposed approach; (3) Location data preprocessing steps; (4) Example of Alice's graph generation; (5) Influence diagram; (6) Screenshot of the mobile application; (7) Example of data points on the map; (8) Runtime execution for reduced graph creation; (9) Precision scores obtained; (10) Recall scores obtained; (11) F1POI scores obtained; (12) Precision, recall, and F1POI scores obtained for the DJ cluster.
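A common baseline for the POI-identification step described above is stay-point detection: a POI candidate is declared wherever consecutive GPS fixes remain within a distance threshold for a minimum dwell time. The sketch below implements that generic baseline; it is not the paper's own algorithm, and the thresholds and toy trace are illustrative assumptions.

```python
# Sketch: baseline stay-point detection over a GPS trace, a common first step
# for POI extraction. A stay-point is declared when consecutive fixes remain
# within dist_m metres for at least min_dwell_s seconds.
from math import radians, sin, cos, asin, sqrt

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 6371000 * 2 * asin(sqrt(a))

def stay_points(trace, dist_m=100, min_dwell_s=600):
    """trace: list of (lat, lon, unix_time) sorted by time; returns centroids."""
    points, i = [], 0
    while i < len(trace):
        j = i + 1
        while j < len(trace) and haversine_m(trace[i][:2], trace[j][:2]) < dist_m:
            j += 1
        if trace[j - 1][2] - trace[i][2] >= min_dwell_s:
            lats, lons, _ = zip(*trace[i:j])
            points.append((sum(lats) / len(lats), sum(lons) / len(lons)))
        i = j
    return points

trace = [(6.3700, 2.3900, 0), (6.3701, 2.3901, 400),
         (6.3702, 2.3900, 800), (6.4000, 2.4200, 1200)]
print(stay_points(trace))
```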
14 pages, 3589 KiB  
Article
Vehicle Localization Using Crowdsourced Data Collected on Urban Roads
by Soohyun Cho and Woojin Chung
Sensors 2024, 24(17), 5531; https://doi.org/10.3390/s24175531 - 27 Aug 2024
Viewed by 363
Abstract
Vehicle localization using mounted sensors is an essential technology for various applications, including autonomous vehicles and road mapping. Achieving high positioning accuracy through the fusion of low-cost sensors is a topic of considerable interest. Recently, applications based on crowdsourced data from a large number of vehicles have received significant attention. Equipping standard vehicles with low-cost onboard sensors offers the advantage of collecting data from multiple drives over extensive road networks at a low operational cost. These vehicle trajectories and road observations can be utilized for traffic surveys, road inspections, and mapping. However, data obtained from low-cost devices are likely to be highly inaccurate. On urban roads, unlike highways, complex road structures and GNSS signal obstructions caused by buildings are common. This study proposes a reliable vehicle localization method using a large amount of crowdsourced data collected from urban roads. The proposed localization method is designed with consideration for the high inaccuracy of the data, the complexity of road structures, and the partial use of high-definition (HD) maps that account for environmental changes. The high inaccuracy of sensor data affects the reliability of localization. Therefore, the proposed method includes a reliability assessment of the localized vehicle poses. The performance of the proposed method was evaluated using data collected from buses operating in Seoul, Korea. The data used for the evaluation were collected 18 months after the creation of the HD maps.
Figures: (1) Localization error in p across multiple traversals: (a) urban road image, (b) HD map with the ground-truth driving lane corresponding to (a), (c) trajectory of p for 172 traversals; (2) Localization method for keyframes in a single traversal; (3) Visualization of data acquisition roads and sensor observations: (a) satellite view of the data acquisition roads, (b) the vehicle-mounted sensing device and observed traffic landmarks, (c) HD map and landmark measurements associated with a single keyframe; (4) Qualitative results of localization and mapping with environmental changes: (a) ground-truth driving path with HD map lane markings, (b) keyframe positions and continuous landmarks from raw data across multiple traversals, (c) the same after applying the proposed method; (5) Comparison of discrete landmark mapping results between the raw data and the proposed method (filled circles mark traffic signs and traffic lights on the HD map, empty circles the mapped observed landmarks); (6) Comparison of continuous landmark mapping results between the raw data and the proposed method.
24 pages, 4557 KiB  
Article
A System Design Perspective for Business Growth in a Crowdsourced Data Labeling Practice
by Vahid Hajipour, Sajjad Jalali, Francisco Javier Santos-Arteaga, Samira Vazifeh Noshafagh and Debora Di Caprio
Algorithms 2024, 17(8), 357; https://doi.org/10.3390/a17080357 - 15 Aug 2024
Viewed by 383
Abstract
Data labeling systems are designed to facilitate the training and validation of machine learning algorithms under the umbrella of crowdsourcing practices. The current paper presents a novel approach for designing a customized data labeling system, emphasizing two key aspects: an innovative payment mechanism for users and an efficient configuration of output results. The main problem addressed is the labeling of datasets where golden items are utilized to verify user performance and assure the quality of the annotated outputs. Our proposed payment mechanism is enhanced through a modified skip-based golden-oriented function that balances user penalties and prevents spam activities. Additionally, we introduce a comprehensive reporting framework to measure aggregated results and accuracy levels, ensuring the reliability of the labeling output. Our findings indicate that the proposed solutions are pivotal in incentivizing user participation, thereby reinforcing the applicability and profitability of newly launched labeling systems.
(This article belongs to the Collection Feature Papers in Algorithms)
Figures: (1) General process of a data labeling system; (2) True/false-type labeling questions: (a) with positive golden items only, (b) with both positive and negative golden items; (3) Sensitivity analysis of the shape parameter (T) absent incorrect labeling of golden items; (4) Sensitivity analysis of the shape parameter (T) without skipping any golden item.
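The abstract above does not spell out the skip-based golden-oriented payment function, but its intent (reward accuracy on golden items while penalizing a skipped golden item less than a wrongly labeled one, so skipping is safe but spamming is not) can be illustrated with a toy payout rule. The functional form, the shape parameter T, and all constants below are assumptions for illustration only, not the paper's mechanism.

```python
# Sketch: one plausible reading of a skip-based, golden-oriented payout rule.
# Payment scales with accuracy on golden items raised to a shape parameter T,
# and a skipped golden item counts against accuracy at a reduced weight, so
# skipping is safer than guessing but cannot dodge every check. All constants
# and the functional form are illustrative assumptions.
def payout(n_labels, golden_correct, golden_wrong, golden_skipped,
           rate=0.02, T=2.0, skip_weight=0.5):
    answered = golden_correct + golden_wrong
    effective_total = answered + skip_weight * golden_skipped
    if effective_total == 0:
        return 0.0
    accuracy = golden_correct / effective_total
    return n_labels * rate * accuracy ** T

print(payout(500, golden_correct=18, golden_wrong=2, golden_skipped=0))   # ~8.10
print(payout(500, golden_correct=18, golden_wrong=0, golden_skipped=2))   # ~8.97
```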
22 pages, 3266 KiB  
Article
How Can Scientific Crowdsourcing Realize Value Co-Creation? A Knowledge Flow-Based Perspective
by Ran Qiu, Guohao Wang, Liying Yu, Yuanzhi Xing and Hui Yang
Systems 2024, 12(8), 295; https://doi.org/10.3390/systems12080295 - 11 Aug 2024
Viewed by 913
Abstract
Presently, the practice of scientific crowdsourcing still suffers from user loss, platform operational inefficiency, and many other dilemmas, mainly because the process mechanism of realizing value co-creation through interaction between users and platforms has not yet been elaborated. To fill this gap, this study takes Kaggle as the research object and explores the realization process and internal mechanism of scientific crowdsourcing value co-creation from the perspective of knowledge flow. The results show that the operation process of Kaggle-based scientific crowdsourcing can be decomposed into five progressive evolutionary stages: knowledge sharing, knowledge innovation, knowledge dissemination, knowledge application, and knowledge advantage formation. The knowledge flow activates a series of value co-creation activities of scientific crowdsourcing, forming a dynamic evolution and continuous optimization of the value co-creation process that includes the value proposition, value communication, value consensus, and all-win value. Institutional logic plays a key role as a catalyst in the value co-creation of scientific crowdsourcing, effectively facilitating the realization of value co-creation by controlling and guiding the flow of knowledge. The study unlocks the "gray box" from knowledge flow to value co-creation, providing new theoretical support and guidance for further enhancing the value co-creation capacity and accelerating the practice of scientific crowdsourcing.
Figures: (1) The core modules and components of the Kaggle platform (source: Kaggle official website, https://www.kaggle.com, accessed on 1 July 2024; compiled by the authors); (2) The operation process of Kaggle's scientific crowdsourcing competition based on knowledge flow; (3) Competition creation (source: Kaggle official website, https://www.kaggle.com); (4) Solution submission and evaluation (source: https://www.kaggle.com/competitions/leaderboard, accessed on 1 July 2024); (5) Composition of the Kaggle-based scientific crowdsourcing service ecosystem; (6) Correlation between solvers' discussion points and competition points (source: https://www.kaggle.com/rankings); (7) Solvers' points distribution by (a) Competition, (b) Dataset, (c) Notebook, and (d) Discussion (source: https://www.kaggle.com/rankings); (8) Solvers' prizes on Kaggle's scientific crowdsourcing platform (source: https://www.kaggle.com/competitions); (9) Theoretical model of the evolutionary process and intrinsic mechanisms of value co-creation in scientific crowdsourcing under knowledge flow.