
3184 Computing and Processing Preprints

Please note: These are preprints and have not been peer reviewed. Data may be preliminary.
On the Exponential Diophantine Equation 7^x − 5^y = z^2
Budee U Zaman

March 04, 2024
In this paper, we address the exponential Diophantine equation 7^x − 5^y = z^2, seeking non-negative integer solutions for x, y, and z. Using several congruence arguments and Catalan's conjecture, we prove that exactly one solution exists. Our analysis shows that (x, y, z) = (0, 0, 0) is the only possible solution. We prove the validity of this claim through a combination of computational methods and concepts from number theory. This outcome advances our knowledge of exponential Diophantine equations and sheds light on how prime numbers and exponentiation interact in such investigations.
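The claimed uniqueness is easy to sanity-check computationally. The following brute-force sketch is illustrative only and assumes nothing beyond the equation itself; the search bound is arbitrary:

```python
# Brute-force check (illustrative) that (x, y, z) = (0, 0, 0) is the only
# non-negative solution of 7**x - 5**y == z**2 within a small exponent range.
from math import isqrt

def solutions(max_exp: int = 40) -> list[tuple[int, int, int]]:
    found = []
    for x in range(max_exp):
        for y in range(max_exp):
            d = 7**x - 5**y
            if d < 0:
                continue                 # z**2 cannot be negative
            z = isqrt(d)
            if z * z == d:               # d is a perfect square
                found.append((x, y, z))
    return found

print(solutions())  # expected: [(0, 0, 0)]
```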
Evaluating Large Language Models: ChatGPT-4, Mistral 8x7B, and Google Gemini Benchmar...
Kensuke Ono and Akira Morita

March 04, 2024
This study was designed to explore the capabilities of contemporary large language models (LLMs), specifically ChatGPT-4, Google Gemini, and Mistral 8x7B, in processing and generating text across different languages, with a focused comparison on English and Japanese. By employing a rigorous benchmarking methodology anchored in the Massive Multitask Language Understanding (MMLU) framework, we sought to quantitatively assess the performance of these models in a variety of linguistic tasks designed to challenge their understanding, reasoning, and language generation capabilities. Our methodology encompassed a diverse range of tests, from simple grammatical assessments to complex reasoning and comprehension challenges, enabling a comprehensive evaluation of each model's linguistic proficiency and adaptability. The key finding of our investigation reveals significant disparities in language performance among the evaluated LLMs, with ChatGPT-4 demonstrating superior proficiency in English, Google Gemini excelling in Japanese, and Mistral 8x7B showcasing a balanced performance across both languages. These results highlight the influence of training data diversity, model architecture, and linguistic focus in shaping the abilities of LLMs to understand and generate human language. Furthermore, our study underscores the critical need for incorporating a more diverse and inclusive range of linguistic data in the training processes of future LLMs. We advocate for the advancement of language technologies that are capable of bridging linguistic gaps, enhancing cross-cultural communication, and fostering a more equitable digital landscape for users worldwide.
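MMLU-style benchmarking ultimately reduces to multiple-choice accuracy aggregated per subject. The sketch below shows that scoring logic in generic form; it is not the authors' harness, and `ask_model` is a hypothetical placeholder for a call to whichever LLM is under test:

```python
# Generic MMLU-style multiple-choice scoring loop (illustrative sketch).
from collections import defaultdict

def ask_model(question: str, choices: dict[str, str]) -> str:
    # Hypothetical stand-in: replace with a real call to the LLM under test,
    # prompted to answer with one of the choice letters.
    return "A"

def evaluate(items: list[dict]) -> dict[str, float]:
    # items: [{"subject": ..., "question": ..., "choices": {...}, "answer": "B"}, ...]
    correct, total = defaultdict(int), defaultdict(int)
    for it in items:
        pred = ask_model(it["question"], it["choices"])
        total[it["subject"]] += 1
        correct[it["subject"]] += int(pred == it["answer"])
    return {s: correct[s] / total[s] for s in total}  # per-subject accuracy

demo = [{"subject": "logic", "question": "2 + 2 = ?",
         "choices": {"A": "4", "B": "5"}, "answer": "A"}]
print(evaluate(demo))
```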
Applying Graph Neural Networks in Pharmacology
Ritwik Raj Saxena and 1 more

February 26, 2024
Objective: Techniques based on artificial intelligence, specifically machine learning, have played a major role in the enhancement of pharmacological methodologies and the development of medical treatments, especially those that are individualized or fall within the province of precision medicine. In this article, we examine how graph neural networks have revolutionized certain important aspects of pharmacology. Background: Pharmacological data is replete with unidirectional as well as bidirectional associations, regarding, for example, drug interactions, patient-centered medicine, precision medicine, multi-omics data analysis, drug discovery, and the optimization of experimental processes. These associations can be more readily modeled using advanced computational methods and machine learning techniques like graph neural networks. The revolutionary advancements in the field of data mining have further fueled the need to create models that can resolve pharmacological correlations and dependencies into easily interpretable outcomes. Methods: We conducted a literature review to find documents that provide relevant information about our objectives. With a comprehensive search plan in place, we selected applicable articles and studied them to identify pertinent points that assisted our understanding of graph neural networks as a tool to improve, automate, and simplify practical applications in pharmacology and pharmacotherapeutics. Conclusion: The review of relevant research has confirmed our hypothesis that graph neural networks can enable an innovative, lasting, and radical departure in pharmaceutical therapeutics. Graph neural networks can automate and simplify many tasks based on the large and complex datasets that are inherent in pharmacological science. Such techniques can help us achieve innovative methods in therapeutics using extant pharmaceuticals and in the development of new drugs, and therefore bode well for the future of healthcare.
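The core operation such models share is message passing over a graph, for instance one whose nodes are drugs and whose edges are known interactions. The following numpy sketch of a single message-passing layer is a generic illustration, not any specific model from the reviewed literature:

```python
# One graph message-passing layer in plain numpy (illustrative).
# adj: (N, N) adjacency matrix of a drug-interaction graph (hypothetical);
# feats: (N, D) node features, e.g., molecular descriptors per drug.
import numpy as np

def message_passing_layer(adj: np.ndarray, feats: np.ndarray,
                          weight: np.ndarray) -> np.ndarray:
    deg = adj.sum(axis=1, keepdims=True)            # node degrees
    agg = (adj @ feats) / np.clip(deg, 1, None)     # mean of neighbor features
    return np.maximum((feats + agg) @ weight, 0.0)  # combine, transform, ReLU

rng = np.random.default_rng(0)
n_drugs, d_in, d_out = 5, 8, 4
adj = (rng.random((n_drugs, n_drugs)) > 0.5).astype(float)
adj = np.maximum(adj, adj.T)                        # undirected interactions
h = message_passing_layer(adj, rng.normal(size=(n_drugs, d_in)),
                          rng.normal(size=(d_in, d_out)))
print(h.shape)  # (5, 4): updated node embeddings
```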
Cloud service meshes: analysis of the least outstanding request load balancing policy...
Andrea Detti and 1 more

February 27, 2024
Service meshes are emerging software frameworks to manage communications among microservices of distributed applications. With a service mesh, each microservice is flanked by an L7 sidecar proxy that intercepts any incoming and outgoing requests for better observability, traffic management, and security. The sidecar proxy uses an application-level load balancing policy to route outbound requests towards possible replicas of destination microservices. A widely used load balancing policy is the Least Outstanding Request (LOR), which routes requests to the microservice replica with the fewest outstanding requests. While the LOR policy significantly reduces request latency in scenarios with a single load balancer, our comprehensive investigation, spanning analytical, simulation, and experimental methodologies, reveals that its effectiveness decreases in environments with multiple load balancers, typical of service meshes serving applications with several microservice replicas. Specifically, the resulting request latency asymptotically tends to that provided by a random load balancing policy as the number of microservice replicas increases. To address this loss in efficacy, we propose a solution based on a new Kubernetes custom resource, named Proxy-Service, offering potential improvements in performance and scalability.
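The effect described here can be reproduced in a toy model: when several load balancers each track only the requests they themselves have routed, their local "least outstanding" view degrades as replicas multiply. The simulation below is a rough illustration under simplified assumptions (random arrivals, random service times), not the paper's analytical model:

```python
# Toy comparison of LOR vs. random routing with multiple independent
# load balancers, each seeing only its own outstanding requests.
import heapq, random

def simulate(policy: str, n_lbs=8, n_replicas=24, n_requests=30_000,
             max_service=40, seed=1) -> float:
    rng = random.Random(seed)
    local = [[0] * n_replicas for _ in range(n_lbs)]  # each LB's private view
    queue = [0] * n_replicas                          # true per-replica backlog
    inflight = []                                     # (finish_time, lb, replica)
    seen = 0
    for t in range(n_requests):
        while inflight and inflight[0][0] <= t:       # complete finished work
            _, lb_i, r_i = heapq.heappop(inflight)
            local[lb_i][r_i] -= 1
            queue[r_i] -= 1
        lb = rng.randrange(n_lbs)                     # request hits a random LB
        if policy == "lor":
            r = min(range(n_replicas), key=lambda i: local[lb][i])
        else:
            r = rng.randrange(n_replicas)
        seen += queue[r]                # latency proxy: backlog found on arrival
        local[lb][r] += 1
        queue[r] += 1
        heapq.heappush(inflight, (t + rng.randrange(1, max_service), lb, r))
    return seen / n_requests

for policy in ("lor", "random"):
    print(policy, round(simulate(policy), 3))
```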
An IoT-Enabled Framework for Smart City Infrastructure Management
Mahmoud Mohamed and 1 more

February 27, 2024
With rapid urbanization, cities face immense pressures on infrastructure and resources. Uncoordinated management of transportation, energy, water, and waste infrastructure leads to inefficiencies, delays, and unsustainability. This paper proposes a novel IoT-enabled framework to address these challenges through holistic, data-driven management of city infrastructure. While prior works have explored IoT point solutions for specific domains, our integrated framework delivers a comprehensive architecture for citywide infrastructure visibility. The edge computing-based distributed design enables scalable real-time analytics across thousands of heterogeneous assets spread city-wide. Through consolidated storage and analytics, interdependencies between various infrastructure systems can be uncovered to optimize overall city operations. The standards-based implementation fosters seamless integration of diverse infrastructure technologies. Our unified data management layer provides a single platform for visual intelligence on city-wide infrastructure health to support data-driven planning. We demonstrate the efficacy of the proposed framework through a case study focused on transportation infrastructure management. The results showcase significant enhancements in operational efficiency, sustainability, and cost savings across transport assets when managed under the IoT-enabled framework versus traditional siloed approaches. This paper provides city leaders and technologists with an implementable blueprint to harness the power of IoT and analytics for transitioning to smarter, more sustainable, and resident-friendly infrastructure.
Quantum Machine Learning for Controller Placement in Software Defined Networks
Swaraj Shekhar Nande and 4 more

February 27, 2024
Future 6G networks will be enabled by full softwarization of network functions and operations and by in-network intelligence for self-management and orchestration. However, the intelligent management of a softwarized network will require massive data mining, analytics, and processing. That is why it is fundamental to find additional resources, like quantum technologies, to help achieve 6G key performance indicators. Quantum properties allow quantum computers to run certain algorithms with fewer queries. Quantum Machine Learning (QML) studies machine learning techniques on quantum computers. In this work, we use a QML algorithm to solve the controller placement problem for a multi-controller Software Defined Network (SDN). The network delay depends on where the controllers are located; thus, it is critical to place controllers at positions that minimize latency between the controllers and their associated switches. We consider an SDN architecture in the early stage of installation, where the network nodes are deployed but connections will be established only after obtaining the controller locations, which reduces the overall controller-to-switch delay. Using different types of datasets, i.e., uniformly distributed and Gaussian distributed points, the experimental results show that the QML algorithm accelerates the SDN clustering methods (which are used to solve the controller placement problem) compared to classical machine learning algorithms (like K-means) while achieving comparable latency.
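As a concrete reference point for the classical baseline mentioned above, controller placement can be cast as clustering: switches are points, controllers are cluster centers, and average controller-to-switch distance is a latency proxy. The following minimal k-means sketch is illustrative only; the paper's contribution is the quantum-accelerated counterpart:

```python
# Classical k-means placement of SDN controllers (illustrative baseline).
import random

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x, y in points:
            j = min(range(k),
                    key=lambda i: (x - centers[i][0])**2 + (y - centers[i][1])**2)
            clusters[j].append((x, y))
        centers = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                   if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers  # candidate controller locations

random.seed(42)
switches = [(random.random(), random.random()) for _ in range(200)]
print(kmeans(switches, k=4))
```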
Advanced Learning Technologies for Intelligent Transportation Systems: Prospects and...
Ruhul Amin Khalil and 5 more

February 27, 2024
Intelligent Transportation Systems (ITS) operate within a highly intricate and dynamic environment characterized by complex spatial and temporal dynamics at various scales, further compounded by fluctuating conditions influenced by external factors such as social events, holidays, and weather. Modeling the interaction among these elements, creating universal representations, and employing them to address transportation issues is a significant endeavor. Yet these intricacies are only one facet of the multifaceted challenges confronting contemporary ITS. This paper offers an all-encompassing survey exploring deep learning (DL) utilization in ITS, primarily focusing on practitioners' methodologies for addressing these challenges. The emphasis lies on the architectural and problem-specific factors that guide the formulation of innovative solutions. In addition to shedding light on state-of-the-art DL algorithms, we also explore potential applications of DL and large language models (LLMs) in ITS, including traffic flow prediction, vehicle detection and classification, road condition monitoring, traffic sign recognition, and autonomous vehicles. We further identify several future challenges and research directions that can push the boundaries of ITS, including the critical aspects of explainability, transfer learning, hybrid models, privacy and security, and ultra-reliable low-latency communication. Our aim for this survey is to bridge the gap between the burgeoning DL and transportation communities, facilitating a deeper comprehension of the challenges and possibilities within this field. We hope that this effort will inspire further exploration of fresh perspectives and issues, which, in turn, will play a pivotal role in shaping the future of transportation systems.
Cricket ODI World Cup 2023 Prediction Using TOPSIS Methodology
Broti Mondal Bonya and 3 more

February 27, 2024
Current research uses TOPSIS to evaluate the 14 Cricket World Cup 2023 teams. Data from the ESPN Cricinfo website was used in this analysis. A comprehensive set of criteria (P1 to P11) was used to evaluate each squad, encompassing various aspects of the game. A numerical labeling system (A1 to A14) and parameter system (P1 to P11) were used to identify team names and qualities more efficiently. The research calculates the normalized matrix and weighted matrix, then finds the best and worst values using TOPSIS. A normalized matrix creates a consistent and uniform framework for evaluating and comparing factors, ensuring impartiality and justification. In contrast, the weighted matrix integrates each criterion's proportional importance into the evaluation process. For each criterion, the ideal best and ideal worst values indicate the best and worst performance. The TOPSIS analysis placed Australia first, Bangladesh second, and New Zealand third. In fourth and fifth place were India and Sri Lanka. Afghanistan, West Indies, England, South Africa, and Pakistan rated sixth to tenth. Nepal placed eleventh, with Ireland, the US, and Zimbabwe in twelfth through fourteenth. To understand team performance, the TOPSIS technique must be accepted. It is important to acknowledge that the Cricket World Cup 2023 results may vary owing to many factors. This study provides a systematic and comprehensive approach to team performance, making it a useful resource for cricket fans and experts interested in the event's competitive dynamics.
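The TOPSIS pipeline described above (normalize, weight, find ideal best/worst, rank by closeness) fits in a few lines of numpy. The sketch below uses small made-up numbers, not the paper's actual criteria values or weights:

```python
# Minimal TOPSIS sketch with illustrative (not actual) data: rows are teams,
# columns are criteria; all criteria here are treated as "higher is better".
import numpy as np

def topsis(matrix: np.ndarray, weights: np.ndarray) -> np.ndarray:
    m = matrix / np.linalg.norm(matrix, axis=0)      # vector normalization
    v = m * weights                                  # weighted normalized matrix
    best, worst = v.max(axis=0), v.min(axis=0)       # ideal best / ideal worst
    d_best = np.linalg.norm(v - best, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)              # closeness: higher = better

scores = topsis(
    np.array([[0.8, 0.6, 0.9],     # hypothetical team A
              [0.5, 0.7, 0.4],     # hypothetical team B
              [0.9, 0.5, 0.7]]),   # hypothetical team C
    weights=np.array([0.5, 0.3, 0.2]),
)
print(np.argsort(-scores))  # ranking, best first
```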
Accelerating Spiking Neural Networks with Parallelizable Leaky Integrate-and-Fire Neu...
Sidi Yaya Arnaud Yarga and 1 more

February 27, 2024
Spiking Neural Networks (SNNs) express higher biological plausibility and excel at learning spatiotemporal features while consuming less energy than conventional Artificial Neural Networks (ANNs), particularly on neuromorphic hardware. The Leaky Integrate-and-Fire (LIF) neuron stands out as one of the most widely used spiking neurons in deep learning. However, its sequential information processing leads to slow training on lengthy sequences, presenting a critical challenge for real-world applications that rely on extensive datasets. This paper introduces the Parallelizable Leaky Integrate-and-Fire (ParaLIF) neuron, which accelerates SNNs by parallelizing their simulation over time, for both feedforward and recurrent architectures. When compared to LIF in neuromorphic speech, image and gesture classification tasks, ParaLIF demonstrates speeds up to 200 times faster and, on average, achieves greater accuracy with similar sparsity. Integrated into a state-of-the-art architecture, ParaLIF's accuracy matches the highest reported performance in the literature on the Spiking Heidelberg Digits (SHD) dataset. These findings highlight ParaLIF as a promising approach for the development of rapid, accurate and energy-efficient SNNs, particularly well-suited for handling massive datasets containing long sequences.
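The sequential bottleneck that ParaLIF removes is visible in a plain discrete-time LIF simulation, sketched generically below in numpy. This is the standard LIF recursion, not the ParaLIF formulation, whose contribution is evaluating this recursion in parallel over the time dimension:

```python
# Standard sequential LIF simulation (illustrative reference point).
import numpy as np

def lif_forward(inputs: np.ndarray, beta: float = 0.9,
                threshold: float = 1.0) -> np.ndarray:
    # inputs: (T, N) input currents; returns (T, N) binary spike trains
    T, N = inputs.shape
    v = np.zeros(N)                        # membrane potentials
    spikes = np.zeros((T, N))
    for t in range(T):                     # the step-by-step time loop
        v = beta * v + inputs[t]           # leaky integration
        spikes[t] = (v >= threshold)       # emit spikes
        v = v * (1.0 - spikes[t])          # reset neurons that fired
    return spikes

rng = np.random.default_rng(0)
out = lif_forward(rng.normal(0.2, 0.5, size=(100, 8)))
print(out.mean())  # average firing rate over time and neurons
```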
Fluid Insights: Navigating Water's Quantum Potential in Computing
Douha Jerbi

February 27, 2024
Water, the ubiquitous yet enigmatic molecule of life, harbors within its structure a realm of quantum mysteries that are increasingly captivating the attention of scientists in the burgeoning field of quantum computing. In this review, we embark on a journey through the fluid landscapes of quantum mechanics, where water molecules emerge as pivotal players in the quest for computational supremacy. From unraveling the intricate dance of hydrogen bonds to probing the delicate interplay of quantum coherence and entanglement, we explore the fundamental principles underlying water's quantum behavior and its profound implications for computing. As researchers navigate the uncharted waters of quantum hydrology, they uncover new vistas of opportunity, from harnessing water's quantum properties for computing advances to envisioning innovative applications in environmental monitoring and beyond. Through this synthesis of theoretical insights and experimental endeavors, we glimpse the promise of a future where quantum water computing stands poised to revolutionize our understanding of nature and reshape the technological landscape. Join us as we dive deep into the quantum symphony of water molecules and chart a course toward unprecedented frontiers in computation and discovery.
Attention-aware Semantic Communications for Collaborative Inference
Jiwoong Im and 5 more

March 04, 2024
We propose a communication-efficient collaborative inference framework in the domain of edge inference, focusing on the efficient use of vision transformer (ViT) models. The partitioning strategy of conventional collaborative inference fails to reduce communication cost because of the inherent architecture of ViTs, which maintains consistent layer dimensions across the entire transformer encoder. Therefore, instead of employing the partitioning strategy, our framework utilizes a lightweight ViT model on the edge device, with the server deploying a larger, more complex ViT model. To enhance communication efficiency and achieve the classification accuracy of the server model, we propose two strategies: 1) attention-aware patch selection and 2) entropy-aware image transmission. Attention-aware patch selection leverages the attention scores generated by the edge device's transformer encoder to identify and select the image patches critical for classification. This strategy enables the edge device to transmit only the essential patches to the server, significantly improving communication efficiency. Entropy-aware image transmission uses min-entropy as a metric to accurately determine whether to depend on the lightweight model on the edge device or to request the inference from the server model. In our framework, the lightweight ViT model on the edge device acts as a semantic encoder, efficiently identifying and selecting the crucial image information required for the classification task. Our experiments demonstrate that the proposed collaborative inference framework can reduce communication overhead by 68% with only a minimal loss in accuracy compared to the server model.
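Both strategies can be sketched in a few lines. The following is an illustrative reconstruction from the description above, not the authors' code; the shapes, the threshold, and the patch count are hypothetical:

```python
# Illustrative reconstruction of the two strategies (not the authors' code).
import numpy as np

def select_patches(patches: np.ndarray, attn_scores: np.ndarray, k: int):
    # keep only the k patches the edge encoder attends to most
    top = np.argsort(-attn_scores)[:k]
    return patches[top], top

def should_offload(class_probs: np.ndarray, threshold: float) -> bool:
    # min-entropy of the edge model's prediction; high min-entropy means
    # low confidence, so the request is forwarded to the server model
    min_entropy = -np.log2(class_probs.max())
    return min_entropy > threshold

rng = np.random.default_rng(0)
patches = rng.random((196, 768))       # 14x14 ViT patch embeddings (hypothetical)
attn = rng.random(196)                 # attention scores from the edge encoder
kept, idx = select_patches(patches, attn, k=49)  # transmit ~25% of patches
print(kept.shape, should_offload(np.array([0.55, 0.30, 0.15]), threshold=0.5))
```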
CoRaiS: Lightweight Real-Time Scheduler for Multi-Edge Cooperative Computing
Yujiao Hu and 6 more

February 27, 2024
Multi-edge cooperative computing, which combines the constrained resources of multiple edges into a powerful resource pool, has the potential to deliver great benefits, such as tremendous computing power, improved response time, and more diversified services. However, the composition of many heterogeneous resources and the lack of scheduling strategies make modeling and coordinating a multi-edge computing system particularly complicated. This paper first proposes a system-level state evaluation model to shield the complex hardware configurations and redefine the different service capabilities at heterogeneous edges. Secondly, an integer linear programming model is designed to optimally dispatch the distributed arriving requests. Finally, a learning-based lightweight real-time scheduler, CoRaiS, is proposed. CoRaiS embeds the real-time states of the multi-edge system and request information, and combines the embeddings with a policy network to schedule the requests, so that the response time of all requests can be minimized. Evaluation results verify that CoRaiS can make high-quality scheduling decisions in real time and can be generalized to other multi-edge computing systems, regardless of system scale. Characteristic validation also demonstrates that CoRaiS successfully learns to balance loads, perceive real-time state, and recognize heterogeneity while scheduling.
Design, Implementation and Evaluation of a New Variable Latency Integer Division Sche...
Marco Angioli and 6 more

February 27, 2024
Integer division is key for various applications and often represents a performance bottleneck due to the inherent mathematical properties that limit its parallelization. This work proposes four 32-bit data-dependent-latency division schemes, derived from the classic non-performing restoring division algorithm. The proposed technique exploits the relationship between the number of leading zeros in the divisor and in the partial remainder to dynamically detect and skip those iterations that result in a simple left shift. While a similar principle has been exploited in previous works, the proposed approach outperforms existing variable-latency divider schemes in average latency and power consumption. We detail the algorithm and its implementation in four variants, offering versatility for specific application requirements. For each variant, we report the average latency evaluated with different benchmarks, and then analyze the synthesis results for both FPGA and ASIC deployment, reporting clock speed, average execution time, hardware resources, and energy consumption, compared with existing fixed- and variable-latency dividers.
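The skipping idea can be modeled at the algorithmic level: whenever the partial remainder has enough leading zeros relative to the divisor, the next several restoring iterations are guaranteed to produce zero quotient bits and can be collapsed into one multi-bit shift. The Python model below is an illustrative behavioral sketch, not the paper's hardware design; `steps` counts the variable number of iterations actually performed:

```python
# Behavioral model (illustrative, not the paper's RTL) of a restoring
# divider that skips guaranteed-shift iterations using leading-zero counts.
def divide(dividend: int, divisor: int, bits: int = 32):
    assert 0 < divisor and 0 <= dividend < (1 << bits)
    q, r, i, steps = 0, 0, bits - 1, 0
    while i >= 0:
        # bits we can absorb while the partial remainder must stay < divisor:
        # those iterations would all be simple left shifts (quotient bits 0)
        skip = divisor.bit_length() - 1 - r.bit_length()
        if skip > 0:
            k = min(skip, i + 1)
            r = (r << k) | ((dividend >> (i - k + 1)) & ((1 << k) - 1))
            i -= k
        else:
            r = (r << 1) | ((dividend >> i) & 1)    # one classic iteration
            if r >= divisor:
                r -= divisor                        # non-performing subtract
                q |= 1 << i
            i -= 1
        steps += 1
    return q, r, steps

print(divide(100, 7, bits=8))  # (14, 2, 7): result in 7 instead of 8 steps
```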
Deduplication of Identities Using Similarity Search in a Scalable Vector Database
Shraddha Surana and 3 more

February 27, 2024
Identity systems increasingly use biometrics to register and uniquely identify individuals. Governments use them to identify and authenticate citizens for voter enrollment, social welfare, border control, KYC, and healthcare. It is therefore essential to ensure people are not registered multiple times and that duplicates are discovered promptly to avoid fraud. This paper proposes a framework for building a scalable deduplication system using facial biometrics and open-source tools. It examines the use of the open-source ArcFace algorithm to create embeddings of representative facial images and the Milvus database to quickly search through millions of images. Such systems help ensure that duplicate identities are not registered in an identity enrollment system. Based on many experiments and combinations of different parameters, the authors achieve 99.79% accuracy, an F1-score of 89.44%, a false positive identification rate (FPIR) of 0.1%, and a false negative identification rate (FNIR) of 0.1%. This work aims to provide the potential configurations, architecture, and parameters, and their effect on accuracy and speed, for implementing a highly scalable deduplication system. The authors elaborate on the impact of each parameter on accuracy and performance. Readers can use this analysis to make an informed decision on the best architecture and combination of parameters for their use case.
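At its core, the duplicate check is a nearest-neighbor search over normalized face embeddings. The sketch below shows that check with a brute-force numpy search standing in for the vector database; the threshold and dimensions are hypothetical, and ArcFace-style embeddings are assumed to be L2-normalized and compared by cosine similarity:

```python
# Brute-force stand-in (illustrative) for the vector-database similarity
# search: flag a new face embedding as a probable duplicate when its cosine
# similarity to any registered embedding exceeds a threshold.
import numpy as np

def is_duplicate(new_emb: np.ndarray, registered: np.ndarray,
                 threshold: float = 0.6):
    # registered: (N, D) matrix of L2-normalized embeddings
    new_emb = new_emb / np.linalg.norm(new_emb)
    sims = registered @ new_emb                 # cosine similarity per identity
    best = int(np.argmax(sims))
    return bool(sims[best] >= threshold), best, float(sims[best])

rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 512))               # hypothetical enrolled identities
db /= np.linalg.norm(db, axis=1, keepdims=True)
print(is_duplicate(rng.normal(size=512), db))
```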
Efficient Error Detection Cryptographic Architectures Benchmarked on FPGAs for Montgo...
Kasra Ahmadi, Saeed Aghapour, and 2 more

February 27, 2024
Elliptic curve scalar multiplication (ECSM) is a fundamental element of pre-quantum public key cryptography, which is the predominant choice for public key cryptography today. ECSM implementations on deeply-embedded architectures and the Internet-of-nano-Things are vulnerable to both permanent and transient errors, as well as fault attacks. Consequently, error detection is crucial. In this work, we present a novel algorithm-level error detection scheme for the Montgomery ladder, which is often used for a family of elliptic curves featuring highly efficient point arithmetic, known as Montgomery curves. Our error detection simulations achieve high error coverage on loop-abort and scalar bit-flipping fault models utilizing a binary tree data structure. Assuming n is the size of the private key, the overhead of our error detection scheme is O(n). Finally, we benchmark our error detection scheme on both ARMv8 and FPGA platforms to illustrate the implementation and resource utilization. Deployed on Cortex-A72 processors, our proposed error detection scheme maintains a clock cycle overhead of less than 3%. Additionally, integrating our error detection approach into FPGAs, including AMD/Xilinx Zynq UltraScale+ and Kintex UltraScale+, results in comparable throughput and less than a 1% increase in area compared to the original hardware implementation. We envision using the proposed architectures in post-quantum cryptography (PQC) based on elliptic curves.
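For readers unfamiliar with the target algorithm, the Montgomery ladder performs one "add" and one "double" per scalar bit in a fixed pattern regardless of the bit's value, which is what makes algorithm-level checks attractive. The sketch below shows the ladder's structure using modular exponentiation as the group operation for brevity; in ECSM the same control flow runs over point addition and doubling:

```python
# Montgomery ladder structure (illustrative), shown with modular
# exponentiation as the group operation; ECSM runs the same fixed
# add/double pattern with elliptic-curve point operations instead.
def montgomery_ladder(base: int, scalar: int, mod: int) -> int:
    r0, r1 = 1, base % mod                 # invariant: r1 == r0 * base (mod mod)
    for i in reversed(range(scalar.bit_length())):
        if (scalar >> i) & 1:
            r0 = (r0 * r1) % mod           # "add"
            r1 = (r1 * r1) % mod           # "double"
        else:
            r1 = (r0 * r1) % mod
            r0 = (r0 * r0) % mod
    return r0

# agrees with Python's built-in modular exponentiation
assert montgomery_ladder(5, 117, 1009) == pow(5, 117, 1009)
```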
LLM Potentiality and Awareness: A Position Paper from the Perspective of Trustworthy...
Iqbal H. Sarker

February 27, 2024
Large Language Models (LLMs) are an exciting breakthrough in the rapidly growing field of artificial intelligence (AI), offering unparalleled potential in a variety of application domains such as finance, business, healthcare, and cybersecurity. However, concerns regarding their trustworthiness and ethical implications have become increasingly prominent as these models, considered black boxes, continue to progress. This position paper explores the potential of LLMs from diverse perspectives, as well as the associated risk factors and the awareness they demand. Towards this, we highlight not only the technical challenges but also the ethical implications and societal impacts associated with LLM deployment, emphasizing fairness, transparency, explainability, trust, and accountability. We conclude the paper by summarizing potential research scopes and directions. Overall, the purpose of this position paper is to contribute to the ongoing discussion of LLM potential and awareness from the perspective of trustworthiness and responsibility in AI.
Visualize deforestation levels with geospatial images & Amazon SageMaker using Se...
Parth Girish Patel and Ishneet Kaur Dua

February 23, 2024
This article demonstrates geospatial analysis techniques to visualize and quantify deforestation using Sentinel-2 satellite imagery. It leverages Amazon SageMaker and open datasets from the Amazon Sustainability Data Initiative (ASDI) to process time-series imagery capturing landscape changes related to wildfires near Paradise, California. After configuring access to the Sentinel-2 data registry, bounding boxes and date ranges isolate relevant pre-event and post-event scenes bracketing major fire events. Comparative visualization highlights patterns of healthy forest persistence versus zones of more complete canopy removal post-fire. The suggestion is to move such analytical routines into an automated pipeline to enable scalable deforestation mapping as new Sentinel-2 observations become available over time. Overall, this article demonstrates core techniques for leveraging cloud-based geospatial data and computing tools to derive actionable intelligence maps and indicators pertinent to sustainability challenges like wildfire impacts and climate adaptation.
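A common core step in such an analysis is differencing a vegetation index between scenes. The sketch below is a generic illustration of that step, not the article's notebook; band arrays are assumed already read into numpy (e.g., via rasterio from the Sentinel-2 registry) and co-registered, and B04/B08 are Sentinel-2's red and near-infrared bands:

```python
# NDVI differencing between two co-registered Sentinel-2 scenes (illustrative).
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    # NDVI = (NIR - Red) / (NIR + Red), from bands B08 (NIR) and B04 (red)
    red, nir = red.astype("float32"), nir.astype("float32")
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def vegetation_loss_mask(before: np.ndarray, after: np.ndarray,
                         drop: float = 0.2) -> np.ndarray:
    # flag pixels whose NDVI dropped by more than `drop` between scenes
    return (before - after) > drop

rng = np.random.default_rng(0)
pre = ndvi(rng.random((512, 512)), rng.random((512, 512)))    # placeholder bands
post = ndvi(rng.random((512, 512)), rng.random((512, 512)))
print(vegetation_loss_mask(pre, post).mean())  # fraction of flagged pixels
```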
A Machine Learning method to quantify tissue quality and correct bias due to preanaly...
Claudio Córdova and 6 more

February 22, 2024
The quality assessment of biomaterials in pathological anatomy is crucial for the optimal diagnosis and treatment of conditions like cancer. This is exemplified in the immunohistochemistry profiling of the human epidermal growth factor receptor 2 (HER2) in breast cancer. It is therefore relevant to understand how preanalytical processes, such as post-surgery handling and fixation quality, impact biomaterial quality and diagnostic accuracy. This study first investigates the influence of fixation steps on the performance of HER2 diagnosis. A quantitative and automated approach is then proposed to correct these biases. This approach is derived from a previous supervised machine learning model. The method, which employs a high-performance logistic model, has been further enhanced with a compensation strategy based on tissue quality. This enhancement utilizes a correction derived from a Tissue Quality Index (TQI) to fine-tune the input parameters of the classification model (referred to as the TQI-Enhancer). Results, obtained from 60 quality control samples with Vimentin and 75 HER2 classification samples, first demonstrate that cold ischemia and fixation times lead to significant changes in immunoreactivity within a short period. Second, adjusting specific parameters quantified in HER2 samples through automated image analysis based on the TQI-Enhancer equation exhibits an improved correlation with the reference diagnosis. This adjustment significantly enhances the classification performance of the logistic classifier in ML-based diagnosis compared to uncompensated data, improving AUC values from 0.84 to 0.93. We anticipate that implementing similar strategies will enhance the performance of digital pathology techniques, ultimately leading to robust diagnostic classifiers for cases of aggressive breast cancer. By analyzing the association between biomarkers like HER2 and patients' clinical outcomes, these classifiers are expected to provide invaluable insights.
Generate Impressive Videos with Text Instructions: A Review of OpenAI Sora, Stable Di...
Enis Karaarslan and 1 more

February 22, 2024
IoT-oriented Artificial Neural Network Optimization Through Tropical Pruning
Lluc Crespí-Castañer and 6 more

February 22, 2024
This work explores the optimization of Multilayer Perceptrons (MLPs), or the dense layers of other kinds of deep neural networks, aimed at edge computing applications such as Internet of Things (IoT) devices, which are very limited in resources. The proposed optimization approach consists of generating a pruning mask for the hidden dense layers of the original neural network by using auxiliary dense Morphological Neural Networks (MNNs). These MNNs have shown notable efficiency in the pruning process, resulting in a significant decrease in the overall number of connections at a low cost in terms of accuracy degradation. The effectiveness of this new pruning methodology has been explained in detail and validated on two widely used datasets, MNIST and Fashion-MNIST, and two well-known neural networks, LeNet-5 and LeNet-300-100. Subsequently, the performance of these pruned neural networks has been assessed on an IoT hardware platform. The experimental results outperform other contemporary state-of-the-art pruning techniques in terms of power efficiency and processing speed for a similar percentage of weight reduction, all while maintaining minimal impact on overall accuracy. In addition, a custom software tool has been developed to generate C code designed to optimize the inference of these pruned networks on IoT edge devices. These findings hold important implications for advancing the development of efficient and scalable deep learning models that are specifically tailored to meet the demands of edge computing applications.
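The end product of such a method is a binary mask applied to a dense layer's weights. The sketch below illustrates only that final masking step in numpy, with a random mask standing in for one derived from the auxiliary morphological networks, which are not reproduced here:

```python
# Applying a pruning mask to a dense layer (generic illustration; the mask
# here is random, whereas the paper derives it from morphological networks).
import numpy as np

def prune_by_mask(weights: np.ndarray, mask: np.ndarray) -> np.ndarray:
    assert weights.shape == mask.shape
    return weights * mask                           # zero out pruned connections

def dense_forward(x: np.ndarray, weights: np.ndarray,
                  bias: np.ndarray) -> np.ndarray:
    return np.maximum(weights @ x + bias, 0.0)      # ReLU dense layer

rng = np.random.default_rng(0)
w = rng.normal(size=(100, 300))
mask = (rng.random(w.shape) > 0.8).astype(w.dtype)  # keep ~20% of weights
w_pruned = prune_by_mask(w, mask)
y = dense_forward(rng.normal(size=300), w_pruned, np.zeros(100))
print(f"sparsity: {1 - mask.mean():.0%}, output shape: {y.shape}")
```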
SCMI30-IITRPR: SMARTPHONE CAMERA MODEL IDENTIFICATION DATASET COMPRISING BOTH SIMILAR...
Kapil Rana and 5 more

February 22, 2024
We present SCMI30-IITRPR, a dataset for smartphone camera model identification (CMI) performance assessment comprising 9937 diverse scene images collected using 30 different camera models. Importantly, to allow assessment of CMI performance under different application settings, where either similar or random content images may be available across the camera models, SCMI30-IITRPR provides images grouped in two sets: one with similar image content and another with random image content. SCMI30-IITRPR therefore overcomes a key limitation of prior datasets, which provided either images with random content or with similar content but not both. Additionally, SCMI30-IITRPR allows researchers to test the robustness of CMI techniques under test conditions mismatched with the training and to explore alternative data selection approaches for more robust training. We present benchmarks of five CMI methods on the SCMI30-IITRPR dataset, highlighting that significant performance variations can be encountered under a mismatch between training and testing scenarios, and that training datasets that merge images with similar and random content offer the most robustness.
A Dynamic and Efficient Self-Certified Authenticated Group Key Agreement Protocol for...
Xuefei Cao and 5 more

February 20, 2024
The development of the Internet of Things has spawned the vehicular ad hoc network (VANET), which facilitates safe and comfortable driving. Communications in a VANET should be protected against message leakage and modification. To solve the security issues in VANET communications, we present a dynamic and efficient authenticated group key agreement (AGKA) protocol, SC-AGKA, with conditional privacy, employing the self-certified cryptosystem, and prove its security based on the computational Diffie-Hellman problem. Our SC-AGKA protocol establishes a group key among multiple group users and achieves conditional privacy for them. Based on our SC-AGKA protocol, we propose an authentication and group key agreement protocol applying the design in VANETs. Performance comparisons show that our protocol offers higher security and greater efficiency in computation and communication than other AGKA designs.
Time Object Model: A New Model for HTML
Andy (Hui) Wang

February 20, 2024
Today's web sites are accelerated by scripts, but their foundation, the web page itself, is still a static structure. The Document Object Model (DOM) represents the structure of a web page. Here we show a new approach: it is possible to combine a time tree and the DOM to shape a new structure named the Time Object Model (TOM). TOM represents not only a static page but also a dynamic stream. We believe the best way to use TOM is to embed it into an HTML page in real time without changing existing content; at present, this is the only approach that works.
Immersive and User-Adaptive Gamification in Cognitive Behavioural Therapy for Hypervi...
Saskia Davies

February 20, 2024
This work proposes a novel combination of behavioural-tracking sensors and immersive virtual reality in a gamified proof-of-concept prototype, which demonstrates affective treatment concepts for hypervigilance symptoms. A number of limitations have been identified in current approaches, prompting more advanced techniques that efficiently target hypervigilance at an individual patient level. In response, we developed a virtual reality first-person shooter that responds to inertial user behaviour in a way that aims to combat detrimental symptoms, proposed as an exploratory investigation into innovative technology and its potential to maximise cognitive behavioural therapy outcomes for hypervigilance treatment. The prototype is evaluated through interactive user studies with 22 participants, gathering a large volume of qualitative data regarding participant experiences and opinions after use. Rigorous thematic analysis finds that participants can independently identify the cognitive behavioural therapy purpose of the intervention without prior knowledge of such intentions, and relate efficacious approaches from the literature to their own experiences. Despite prospective apprehension, themes also demonstrate widespread adherence and acceptance of such approaches to hypervigilance treatment, alongside perceived effectiveness both of experienced outcomes and future potential. These results support the validity of combining such technologies in the context of cognitive behavioural therapy interventions, such that the standard of future interventions may be improved.