AUTHOREA

3184 computing and processing Preprints

Please note: These are preprints and have not been peer reviewed. Data may be preliminary.
RMTnet: Recurrence meet Transformer for Alzheimer's disease diagnosis using FDG-PET
Uttam Khatri

and 1 more

January 22, 2024
Alzheimer's disease (AD) is a chronic, degenerative brain disease that affects memory, thinking, and retention. Early diagnosis of AD is essential for effective therapy before clinical symptoms appear. Positron Emission Tomography (PET) measures the decline in glucose concentration in the temporoparietal association cortex. By identifying meaningful features in medical images, deep learning, an artificial intelligence technology, is used to identify and predict disease. A convolutional neural network (CNN) is an example of an effective application of deep learning for diagnosing Alzheimer's disease. In several diagnostic imaging classification tasks, Vision Transformers (ViT) have recently outperformed CNNs. Transformers allow attention to be drawn to all previously computed elements in a sequence, so they exhibit minimal inductive bias toward learning compressed representations over time. A slow, naturally iterative stream tries to learn a specialized, compressed representation by grouping K time-step parts into a single representation decomposed into multiple vectors. With the proposed approach, we intend to achieve Transformer expressiveness while promoting improved representational structure and slow in-stream compression on the ADNI dataset. For visual perception and sequential decision-making tasks, we demonstrate the advantages of the proposed technique in terms of improved sample efficiency and generalization performance over other competitive benchmarks. Accordingly, we propose a technique to identify dementia by combining 18F-Florbetaben PET scans with ViT. The results show that the proposed method can be successfully applied in the field of brain imaging and may offer a potential way to use pre-trained models in data-intensive applications. Moreover, compared with most current studies, the proposed cross-domain transfer learning technique achieves comparable classification performance.
According to the experimental findings, the suggested model has an accuracy of 91.08% when applied to the ADNI database for the AD/CN classification task. To explain the findings, we offer an Explainable Artificial Intelligence paradigm using attention maps.
Training vision transformer with gradient centralization optimizer for Alzheimer's di...
Uttam Khatri

and 1 more

January 22, 2024
Increasingly common in the aging population, Alzheimer's disease (AD) is a neurodegenerative disorder. Early identification and care are the best ways to prevent AD. In several diagnostic imaging classification tasks across multiple groups of medical images, Vision Transformers (ViT) have recently shown classification results superior to CNNs. ViT, which tracks direct associations between image regions, may be more useful for brain image analysis than CNNs, since the brain is a complicated system with interconnected parts. Traditional ViT is unable to attend to the target class efficiently due to iterative attention brought on by a large constant temperature factor and inductive bias. This work suggests Shifted Patch Tokenization (SPT) and position encoding using CoordConv Position Encoding (CPE) to reduce the locality inductive bias of ViT. Moreover, we propose a gradient centralization technique with the Adam optimizer for better and faster training. We demonstrate qualitatively how each strategy serves a more crucial setting and helps to identify Alzheimer's disease. The experimental findings show that the suggested approach for distinguishing AD from HC yielded a classification accuracy of 92.30%, with a sensitivity of 95.31% and a specificity of 91.45%, making it state-of-the-art in terms of diagnostic accuracy. These findings demonstrate the clinical relevance and efficacy of the suggested approaches for identifying AD.
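The gradient centralization step this abstract refers to is simple to state: before the optimizer update, each weight gradient is re-centered to zero mean over all dimensions except the output-channel axis. A minimal NumPy sketch of that step combined with a standard Adam update (function names and hyperparameters are illustrative, not the paper's implementation):

```python
import numpy as np

def centralize_gradient(grad):
    """Gradient centralization: subtract the gradient's mean computed over
    all dimensions except the first (output-channel) axis.  A 1-D gradient
    (e.g. a bias gradient) is returned unchanged."""
    if grad.ndim > 1:
        axes = tuple(range(1, grad.ndim))
        grad = grad - grad.mean(axis=axes, keepdims=True)
    return grad

def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update applied to a centralized gradient."""
    g = centralize_gradient(grad)
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)          # bias-corrected second moment
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```

Because the re-centering only changes the gradient handed to the optimizer, it can wrap any existing update rule, which is what makes it cheap to add to ViT training.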
BIRD: Business Insights and Recommendations Developer using Large Language Models
Sarathbabu Karunanithi

January 22, 2024
Business recommendations and proposals start with asking business questions, deriving insights from answering those questions, forming recommendations from those insights, and producing final proposals for implementation. In this paper, we present an end-to-end solution framework called BIRD (Business Insights and Recommendations Developer), which employs Large Language Models (LLMs) throughout this business analysis cycle: developing business questions, extracting insights, and providing recommendations in an end-to-end automated process. The framework also allows user interaction at any step for additional context or commands.
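The cycle described above maps naturally onto three chained LLM calls. A hypothetical sketch, with `llm` standing in for any chat-completion function (all names and prompts here are illustrative; BIRD's actual interfaces are not shown in the abstract):

```python
def run_bird(data_summary, llm):
    """Question -> insight -> recommendation cycle, each stage driven by an
    LLM call; user-supplied context could be appended to any prompt between
    stages."""
    questions = llm(
        f"Given this data summary, list the key business questions:\n{data_summary}")
    insights = llm(
        f"Answer these business questions using the data:\n{questions}")
    recommendations = llm(
        f"Turn these insights into actionable recommendations:\n{insights}")
    return {"questions": questions, "insights": insights,
            "recommendations": recommendations}
```

Passing the LLM in as a callable keeps the pipeline testable with a stub and lets any provider be swapped in.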
Harnessing the Synergy: Federated Learning Meets Edge Computing in 5G Ecosystems
Elizabeth Ango Fomuso Ekellem

January 22, 2024
In the era of rapid digital transformation, the integration of Federated Learning (FL) with Edge Computing in 5G networks emerges as a pivotal innovation, offering a new paradigm for data processing and intelligence distribution. This paper delves into the core concepts of FL and its symbiotic relationship with Edge Computing within 5G infrastructures. We examine the unique challenges such as data privacy, computational resource management, and network reliability, alongside the dynamic opportunities that this integration presents. Through a comprehensive review of current research and developments, this study not only highlights the technological advancements but also sheds light on the socioeconomic impacts of FL in 5G Edge Computing. By offering perspectives from both industry and academia, the paper aims to chart a course for future research and implementation strategies, paving the way for a more connected and intelligent world.
Enhancing Multicultural and Multilingual Education through Problem-Based Teaching wit...
Elizabeth Ango Fomuso Ekellem

January 22, 2024
This review explores the integration of conversational AI, particularly chatbots like ChatGPT, in problem-based teaching methodologies within diverse educational settings. The rapid advancement in AI technology offers unique opportunities for addressing the challenges faced by students from varied cultural and linguistic backgrounds. By focusing on problem-based learning (PBL) strategies enhanced by conversational AI, this paper examines how AI tools can facilitate personalized learning experiences, promote cultural understanding, and overcome language barriers. The review synthesizes current research findings, highlighting the effectiveness of AI in creating inclusive and adaptive learning environments. Special attention is given to the potential of these technologies in fostering critical thinking, collaboration, and problem-solving skills among students with diverse needs. The paper also addresses the ethical considerations and challenges in implementing AI in educational contexts.
Leveraging Conversational AI for Adult ADHD Management: Enhancing Social Skills and D...
Elizabeth Ango Fomuso Ekellem

January 18, 2024
This study aims to explore the efficacy of conversational AI technologies in assisting adults living with ADHD, diagnosed or undiagnosed, in coping with daily challenges and improving social skills. While ADHD is often associated with youth, many adults also struggle with its symptoms, which can include disorganization, impulsivity, and difficulties in maintaining relationships. This research will investigate how AI chatbots, through personalized interactions and behavioral modification strategies, can aid in managing these symptoms. The study will also consider factors to increase accessibility and engagement with these AI tools among adults with ADHD, addressing potential stigmas and technological barriers. Through a systematic review of current literature and case studies, this research aims to provide insights into the development of AI tools tailored for adult ADHD support, contributing to their social and functional well-being.
Enhancing Mental Health and Academic Performance in Youth with ADHD and Related Disor...
Elizabeth Ango Fomuso Ekellem

January 18, 2024
This research explores the potential of conversational artificial intelligence (AI) as a tool to support children and young adults with Attention Deficit Hyperactivity Disorder (ADHD), compulsive disorders, and other mental health challenges, particularly in educational settings. Recognizing the unique difficulties this demographic faces, the study aims to investigate how conversational AI can provide structured daily assistance, improve mental well-being, and enhance academic performance. The research draws on various interdisciplinary studies, including those focused on empathy-driven AI, AI in psychiatry, and chatbots in mental health. Through a systematic analysis of existing literature and case studies, this research endeavors to provide a comprehensive understanding of the effectiveness of conversational AI in supporting youth with ADHD and related disorders, offering insights into future developments in this field.
Weber-Maxwell electrodynamics: classical electromagnetism in its most compact and pur...
Steffen Kühn

January 18, 2024
Weber-Maxwell electrodynamics is a modernized, compressed, cleansed and, in many respects, advantageous representation of classical electrodynamics that results from the Liénard-Wiechert potentials. In the non-relativistic domain, it is compatible with both Maxwell's electrodynamics and Weber electrodynamics. It is suitable for all electrical engineering tasks, ranging from electrical machines to radar and high-frequency technologies. Weber-Maxwell electrodynamics also simplifies access to quantum physics and other areas of modern physics, such as optics and atomic physics. Particular advantages of Weber-Maxwell electrodynamics are its simple and fast computability in computer calculations and, as it is based on point charges, in the simulation of plasmas. The latter is particularly important for fusion research. Moreover, Weber-Maxwell electrodynamics is also highly suited to academic and post-primary education, as it allows an easy comprehension of both magnetism and electromagnetic waves. Due to the novelty of Weber-Maxwell electrodynamics, there are currently no articles that summarize its most important aspects. The present article aims to achieve this.
Peshnaja: a framework for predicting survivability of glioblastoma patients using ML...

Aleema Ashfaq

and 7 more

January 18, 2024
Glioblastoma is a common and fatal tumor with a poor survival rate. To choose the best course of treatment, patients and providers need to predict a patient's survival. Historically, statistical methods have been used to analyze clinical features and forecast survival; recently, the same is being accomplished with artificial intelligence techniques. However, most of these works are limited to predicting 1-, 2-, or 10-year survivability, with several of them simulating data to balance the dataset. Hence, there is a need for fine-grained prognosis without tampering with the data. To achieve this, we employ data from Surveillance, Epidemiology, and End Results (SEER) along with an ensemble of classification and regression models to develop a fine-grained model that predicts the survival period of glioblastoma patients. The proposed framework, titled 'Peshnaja', offers higher resolution in glioblastoma prognosis while showing an accuracy of 70% with an overall RMSE of 2.65. Moreover, a comparison of Peshnaja with other frameworks shows that we neither imputed missing values nor employed synthetic data to force good results, keeping Peshnaja true to the existing data.
Online Proctoring System: A Client Side Approach Using Deep Learning
Devesh Bedmutha

and 4 more

January 15, 2024
An AI-based online proctoring system isn't a new concept, and many capable exam portals already exist. However, all of them share an unsolved design flaw: server-side processing. To detect suspicious activity, these sites either take snapshots of the examinee at regular intervals, which is feasible but very weak, or continuously send the video feed to the server for processing, which is more effective but highly expensive. Sending video feeds of tens of thousands of students and processing them in real time can be very heavy on the server as well as costly for the client. To counter these flaws, we propose an AI-based proctoring system that works securely on the client side. The overall goal is to run the face detection and suspicious activity detection systems on the client side, which significantly reduces server load and dependency on the network. In this review paper, we explore various algorithms for face verification and object detection, review pre-existing online proctoring systems (OPS), and examine their architectures.
Graph Contrastive Learning for Anomaly Detection and Personalized Alerting in Sensor-...
Nivedita Bijlani

and 5 more

January 16, 2024
Sensor-based remote health monitoring can be used for prompt detection of adverse health events in people living with dementia at home. Current anomaly detection approaches are challenged by noisy data, unreliable event annotation, and wide variability in home settings. We hypothesized that a downturn in health would present as a discernible shift in spatiotemporal patterns, which can be identified by monitoring the temporal evolution of the household movement graph. We present a lightweight contrastive learning approach to detect adverse events using home activity changes, along with household-personalized alerting thresholds based on a clinician-set target alert rate. Our self-supervised Graph Barlow Twins model with aggregation-based node feature masking is used to generate daily activity representations for participant households, taken from a real-world dataset collected by the UK Dementia Research Institute. Daily graph differences constitute the anomaly score, which is compared to the household-personalized threshold, and alerts are raised to the clinical monitoring team. Attention weights from the graph encoder support explainability and help locate the source of an anomaly. Our model outperforms state-of-the-art temporal graph algorithms in detecting agitation and fall events for three distinct patient cohorts, with 81% average recall and 88% generalizability at a target alert rate of 7%. To the best of our knowledge, we offer the first use case of negative-sample-free graph contrastive learning for anomaly detection in a healthcare setting that is domain-agnostic and can be applied to wider IoT settings.
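The alerting logic described, a daily graph-difference score compared against a household-personalized threshold derived from a clinician-set target alert rate, can be sketched in a few lines. This is a simplified stand-in (embedding distance for the graph difference, an empirical quantile for the threshold), not the paper's exact computation:

```python
import numpy as np

def anomaly_scores(daily_embeddings):
    """Anomaly score for each day: the distance between consecutive daily
    activity-graph embeddings (a toy stand-in for the paper's
    graph-difference score)."""
    emb = np.asarray(daily_embeddings, dtype=float)
    return np.linalg.norm(np.diff(emb, axis=0), axis=1)

def personalized_threshold(score_history, target_alert_rate):
    """Household-personalized threshold: chosen so that roughly
    `target_alert_rate` of historical days would have raised an alert."""
    return float(np.quantile(score_history, 1.0 - target_alert_rate))
```

Setting the threshold per household from that household's own score history is what absorbs the wide variability in home settings the abstract mentions.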
Reinforcement Learning With Large Language Models (LLMs) Interaction For Network Serv...
Hongyang Du

and 5 more

January 10, 2024
Artificial Intelligence-Generated Content (AIGC)- related network services, especially image generation-based services, have garnered notable attention due to their ability to cater to diverse user preferences, which significantly impacts the subjective Quality of Experience (QoE). Specifically, different users can perceive the same semantically informed image quite differently, leading to varying levels of satisfaction. To address this challenge and maximize network users’ subjective QoE, we introduce a novel interactive artificial intelligence (IAI) approach using Reinforcement Learning With Large Language Models Interaction (RLLI). RLLI leverages Large Language Model (LLM)-empowered generative agents to simulate user interactions, thereby providing real-time feedback on QoE that encapsulates a range of user personalities. This feedback is instrumental in facilitating the selection of the most suitable AIGC network service provider for each user, ensuring an optimized, personalized experience.
Model predictive path-following framework for generalised N-trailer vehicles in the p...
Nestor Nahuel Deniz

and 2 more

January 10, 2024
Effective obstacle detection and avoidance play pivotal roles in the implementation of autonomous navigation systems. While numerous authors have addressed obstacle avoidance for single unicycles and car-like vehicles, this work extends the scope to encompass generalised N-trailer vehicles, consisting of a single active segment pulling an arbitrary number of trailers. In contrast to treating obstacles as hard constraints or barrier functions, we introduce a unique approach by modelling them as soft constraints. Gaussian functions are seamlessly integrated into the objective function of the model predictive controller, preserving the convexity of the search space and significantly alleviating computational demands. Although this strategy allows regions occupied by obstacles to remain viable for navigation, we counteract this by thoughtfully designing the amplitude of the Gaussian function. This design is influenced by various components within the formulation, discouraging navigation through obstacle-occupied spaces. The effectiveness of this approach is substantiated through a series of simulated and field experiments involving a tractor pulling two trailers. These experiments showcase the method’s proficiency in navigating around obstacles while maintaining computational efficiency, thereby affirming its practical viability in real-world scenarios.
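The soft-constraint idea above, adding a Gaussian bump per obstacle to the MPC objective instead of imposing a hard constraint, can be shown directly. A minimal sketch (the amplitude and width values are illustrative; in the paper the amplitude is designed against the other cost terms in the formulation):

```python
import math

def gaussian_obstacle_cost(x, y, obstacles, amplitude=50.0, sigma=1.0):
    """Soft-constraint obstacle penalty: one Gaussian bump per obstacle,
    added to the MPC stage cost.  A large `amplitude` discourages, but does
    not strictly forbid, trajectories through occupied space, which keeps
    the search space convex-friendly and cheap to evaluate."""
    cost = 0.0
    for ox, oy in obstacles:
        d2 = (x - ox) ** 2 + (y - oy) ** 2
        cost += amplitude * math.exp(-d2 / (2.0 * sigma ** 2))
    return cost
```

The penalty is largest at the obstacle center and decays smoothly with distance, so the optimizer is steered around obstacles without the infeasibility risks of hard constraints.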
Study on Neural Network Development Tools for Web Applications and an Attempt to Adva...
Majdi Awad

January 26, 2024
The integration of artificial neural networks (ANNs) into web applications has transformed the landscape of technological advancements, allowing for sophisticated functionalities in various domains. This paper constitutes a comprehensive study that delves into the significance of programming languages and tools in enabling the implementation of ANNs within web-based frameworks. Specifically, the discussion centers around PHPNeuroForge, a powerful library built within the PHP ecosystem, tailored to facilitate the development and deployment of ANNs. With an ever-growing demand for web applications to harness the potential of machine learning and AI, the choice of programming language and framework becomes a pivotal factor in ensuring efficiency, scalability, and performance. This study aims to explore the capabilities of PHPNeuroForge in enhancing web applications by seamlessly integrating ANNs, thereby contributing to the evolution of intelligent and responsive systems on the web. PHPNeuroForge emerges as a robust toolset that enables developers to create neural networks with ease, leveraging the flexibility and familiarity of the PHP language. The library provides a comprehensive suite of functionalities, ranging from constructing neural network architectures to training models, conducting predictions, and handling complex computations. Its intuitive design and extensive documentation empower developers to build ANNs tailored to specific application needs. Moreover, this paper conducts an in-depth comparative analysis between PHPNeuroForge and other popular programming languages for neural network development, such as Python and C++. Through benchmarks, case studies, and performance evaluations, the study elucidates the strengths and limitations of PHPNeuroForge concerning speed, accuracy, scalability, and usability in web applications. 
The investigation delves into PHPNeuroForge's adaptability in diverse scenarios, showcasing its potential across domains such as finance, healthcare, e-commerce, and natural language processing. Additionally, the paper highlights the flexibility of PHPNeuroForge in handling various ANN architectures, including feedforward networks, convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their applications in image recognition, sentiment analysis, and recommendation systems. Furthermore, the study discusses the community support, maintenance, and future prospects of PHPNeuroForge, emphasizing its role in democratizing AI for web development. The aim is to provide developers, researchers, and tech enthusiasts with insights into harnessing the capabilities of PHPNeuroForge to build intelligent and responsive web applications seamlessly integrated with artificial neural networks. In conclusion, PHPNeuroForge emerges as a promising avenue for developers seeking to infuse AI capabilities into their web applications, offering a viable and accessible platform within the PHP ecosystem. The study aims to contribute to the discourse on programming languages and tools for ANNs, presenting PHPNeuroForge as a compelling solution driving the advancement of intelligent web technologies.
Benchmarking Bridge Aggregators
Shankar Subramanian
André Augusto

and 4 more

February 27, 2024
Blockchain aggregators play an instrumental role in the evolution of blockchain technology, serving as pivotal enablers of interoperability, efficiency, and user accessibility in an increasingly decentralized digital world. However, the literature on this emerging technology is scarce and not systematized, making it harder for practitioners and researchers to understand the field. In this paper, we systematize blockchain aggregators, with a specific emphasis on bridge aggregators. We present an exhaustive analysis of a diverse array of token and message aggregators, each distinguished by its unique architecture. Our investigation delves into critical aspects of these aggregators, encompassing their functionality, security measures, pricing models, and latency characteristics. The objective of this research is to furnish readers, encompassing both users and developers, with insightful and actionable information, thereby facilitating informed navigation through the complex landscape of blockchain aggregators.
Variational Estimation of Optimal Signal States for Quantum Channels
Leonardo Oleynik

and 3 more

January 18, 2024
This paper explores the performance of quantum communication systems in the presence of noise and focuses on finding the optimal encoding for maximizing the classical communication rate, approaching the classical capacity in some scenarios. Instead of theoretically bounding the ultimate capacity of the channel, we adopt a signal-processing perspective to estimate the achievable performance of a physically available but otherwise unknown quantum channel. By employing a variational algorithm to estimate the trace distance between quantum states, we numerically determine the optimal encoding protocol for the amplitude damping and Pauli channels. Our simulations demonstrate the convergence and accuracy of the method with a few iterations, confirming that optimal conditions for binary quantum communication systems can be variationally determined with minimal computation. Further, since the channel knowledge is not required at the transmitter or at the receiver, these results can be employed in arbitrary quantum communication systems, including satellite-based communication systems, a particularly relevant platform for the quantum Internet.
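The quantity being estimated variationally is the trace distance T(ρ, σ) = ½ Tr|ρ − σ|. For small systems it can be computed exactly from the eigenvalues of the Hermitian difference, which is a useful cross-check on any variational estimate; a NumPy sketch:

```python
import numpy as np

def trace_distance(rho, sigma):
    """Trace distance T(rho, sigma) = 0.5 * Tr|rho - sigma|, computed
    exactly from the eigenvalues of the Hermitian difference.  The paper's
    variational algorithm estimates this quantity on hardware instead."""
    eigvals = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * float(np.sum(np.abs(eigvals)))
```

For orthogonal pure states the trace distance is 1 (the states are perfectly distinguishable), and for identical states it is 0, which makes those two cases natural sanity checks.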
Seeking Optimal and Explainable Deep Learning Models for Inertial-based Posture Recog...
Diogo R Martins

and 2 more

January 10, 2024
Deep Learning (DL) models, widely used in several domains, are currently often used for posture recognition. This work researches four DL architectures for posture recognition: Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), hybrid CNN-LSTM, and transformer. Agriculture and construction working postures were addressed as use cases, by acquiring an inertial dataset during the simulation of their typical tasks in circuits. Since model performance greatly depends on the choice of the hyperparameters, a grid search was conducted to find the optimal hyperparameters. An extensive analysis of the hyperparameter combinations' effects is presented, identifying some general tendencies. Moreover, to unveil the black-box DL models, we applied the Gradient-weighted Class Activation Mapping (Grad-CAM) explainability method on CNN's outputs to better understand the model's decision-making, in terms of the most important sensors and time steps for each window output. An innovative combination of CNN and LSTM was implemented for the hybrid architecture, by using the convolution feature maps as LSTM inputs and fusing both subnetworks' outputs with weights, which are learned during the training. All architectures were successful in recognizing the eight posture classes, with the best model of each architecture exceeding 91.5% F1-score in the test. A top F1-score of 94.31%, with an inference time of just 2.96 ms, was achieved by a hybrid CNN-LSTM.
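The weighted fusion of the two subnetworks' outputs can be illustrated with a toy sketch. In the paper the fusion weights are learned during training; here a fixed `alpha` is used purely for illustration:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fused_prediction(cnn_logits, lstm_logits, alpha=0.6):
    """Weighted fusion of the CNN and LSTM subnetwork outputs, followed by
    a softmax and argmax to pick the posture class."""
    fused = alpha * np.asarray(cnn_logits) + (1 - alpha) * np.asarray(lstm_logits)
    return softmax(fused).argmax(axis=-1)
```

Learning the fusion weights lets the hybrid lean on whichever subnetwork is more reliable for a given posture class, rather than averaging them blindly.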
Computation Overhead Optimization Strategy and Implementation for Dual-Domain Sparse-...
Zihan Deng

and 4 more

January 10, 2024
Sparse-view computed tomography (CT) significantly reduces radiation doses to the human body, but its analytical reconstruction exhibits severe streak artifacts. Recently, deep learning methods have shown promising results in CT reconstruction. The Dual-Domain (DuDo) deep learning method is one of the representative approaches: it processes information in both the sinogram and image domains. However, existing DuDo methods do not pay enough attention to the allocation of training costs and strategies between the two domains. In this paper, we propose a Computation-Overhead Optimization (COO) DuDo training strategy for sparse-view CT reconstruction, i.e., COO-DuDo. The training ratio between the domains is controlled by calculating their computation overhead, loss, and the gradient variation of the loss. To let our COO-DuDo strategy better support sparse-view CT reconstruction, we adopt a DuDo-Network (COO-DDNet) structure based on two encoding-decoding-type subnetworks. As specific contributions, we design a Multilevel Cross-domain Connection (MCC) method to connect the decoding layers of the same scale in the two subnetworks and adopt a two-channel upsampling method, which enables more fine-grained control of model updates and suppresses checkerboard artifacts. The evaluation results validate the effectiveness of our training strategy and methods: the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of the reconstructions increased by 38.8% and 0.37%, respectively, and model convergence time decreased by 11.8%. Our research provides a broader perspective on dual-domain image restoration tasks from the standpoint of computational overhead.
Intelligent Integrated System for Future Crop Recommendation: Advancing Sustainable A...
Jorge Felix Martínez Pazos

and 3 more

January 10, 2024
The rise of global warming and climate change poses a potential threat to the agricultural industry. In response to this challenge, this research introduces an intelligent integrated system for recommending future crops using climate forecasting and crop recommendation models, with the goal of improving efficiency and productivity in the Cuban agricultural sector. By employing a weighted classifier with the Relative Model Accuracy Equation as the weight distribution, the classification models were improved, achieving 99.8% on each performance metric (precision, recall, and F1 score) evaluated over the test set. The main contribution of this study is an integrated intelligent system that leverages supervised machine learning techniques to predict the optimal crop, among 22 possible crops, for a given soil in a specific state of Cuba during a particular year. The system is built as a Python module to allow its integration into future software solutions and is released under the MIT open-source license.
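One plausible reading of accuracy-weighted combination, with each model's vote weighted by its accuracy relative to the total, can be sketched as follows (this is an assumed interpretation for illustration; the abstract does not spell out the Relative Model Accuracy Equation):

```python
import numpy as np

def weighted_vote(probas, accuracies):
    """Combine per-model class-probability arrays, weighting each model by
    its accuracy relative to the sum of all models' accuracies (assumed
    reading of relative-accuracy weighting), then pick the argmax class."""
    w = np.asarray(accuracies, dtype=float)
    w = w / w.sum()  # normalize so the weights sum to 1
    combined = sum(wi * np.asarray(p) for wi, p in zip(w, probas))
    return combined.argmax(axis=1)
```

Relative-accuracy weighting lets a strong model dominate the vote while still allowing weaker models to break near-ties.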
Testing a rich sample of cybercrimes dataset  by using powerful classifiers’ competen...
Elrasheed Ismail Mohommoud ZAYID

and 2 more

January 10, 2024
The key goal of this study was to collect a real network-traffic sample dataset and mine it deeply in order to help secure the Saudi community by characterizing the patterns of Saudi cyberspace. A heterogeneous simultaneous optical multiprocessor exchange bus architecture was used as the backbone network for collecting the network traffic. First, crucial cleaning processes were performed on the very noisy and dirty dataset; a total of 1,048,575 data points and 22 features were retained for the model and data evaluation processes. Second, the Lazy Predict mechanism was recruited to nominate the top-ranking candidate learning models. Third, powerful supervised computation algorithms were used to characterize the terabyte-scale payload traffic across the Saudi cyber domain. Finally, to choose the best Saudi cybercrime classification model, intensive experiments were run and analyzed. The performance metrics used were accuracy (Acc), balanced accuracy (BAcc), F1-score, learning time taken, and the confusion matrix. Evaluating the performance of the different models with "Destination" as the target, the decision tree classifier (DTC) ranked first (highest BAcc with low time taken), and Saudi Arabia was the 9th country as a generated source target.
FRAUD TRANSACTION DETECTION FOR ANTI-MONEY LAUNDERING SYSTEMS BASED ON DEEP LEARNING
Jorge Felix Martínez Pazos

and 4 more

January 10, 2024
This study addresses the escalating problem of financial fraud, with a particular focus on credit card fraud, a phenomenon that has skyrocketed due to the increasing prevalence of online transactions. The research aims to strengthen anti-money laundering (AML) systems, thereby improving the detection and prevention of fraudulent transactions. For this study, a Dense Neural Network (DNN) has been developed to predict fraudulent transactions with high accuracy. The model is based on deep learning, and given the highly unbalanced nature of the dataset, balancing techniques were employed to mitigate the bias towards the minority class and improve performance. The DNN model demonstrated robust performance, generalizability, and reliability, achieving over 99% accuracy across training, validation, and test sets. This indicates the model's potential as a powerful tool in the ongoing fight against financial fraud. The results of this study could have significant implications for the financial sector, corporations, and governments, contributing to safer and more secure financial transactions.
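The class-balancing step described above can be sketched with random oversampling of the minority (fraud) class before training a small dense network. The fraud dataset is not public, so the synthetic imbalanced data, the oversampling choice (the paper does not specify its balancing technique), and the use of scikit-learn's MLP as a stand-in for the paper's DNN are all assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Imbalanced toy data standing in for a card-fraud set: ~3% positives.
X, y = make_classification(n_samples=4000, n_features=20, n_informative=10,
                           weights=[0.97, 0.03], flip_y=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Random oversampling: duplicate minority rows until classes are even.
rng = np.random.default_rng(0)
pos = np.where(y_tr == 1)[0]
extra = rng.choice(pos, size=(y_tr == 0).sum() - len(pos), replace=True)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

# Small dense network trained on the balanced data.
dnn = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                    random_state=0)
dnn.fit(X_bal, y_bal)
print(f"fraud-class recall: {recall_score(y_te, dnn.predict(X_te)):.3f}")
```

Note that on imbalanced data, minority-class recall (catching actual fraud) is a more informative check than raw accuracy, which the majority class dominates.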
FRESHNETS: HIGHLY ACCURATE AND EFFICIENT FOOD FRESHNESS ASSESSMENT BASED ON DEEP CONV...
Jorge Felix Martínez Pazos

and 3 more

January 10, 2024
Food freshness classification is a growing concern in the food industry, mainly to protect consumer health and prevent illness and poisoning caused by consuming spoiled food. Aiming to take a significant step towards improving food safety and quality-control measures in the industry, this study presents two deep-learning models for classifying fruit and vegetable freshness: a robust model and an efficient model. Performance evaluation shows remarkable results: the robust and efficient models achieve 97.6% and 94.0% accuracy respectively, both achieve AUC scores above 99%, and the two models' inference times over 844 images differ by 14 seconds.
Enhancing Brain Tumor Diagnosis using CNN Models: A Comparative Analysis
Prathamesh Dinesh Joshi

January 10, 2024
Brain tumors, characterized by the emergence of abnormal cell growths within or around the brain, stand as a significant medical challenge with the potential for grave consequences. Whether they are categorized as benign or malignant, swift diagnosis and treatment remain paramount. This research explores the integration of pretrained deep learning models, particularly Convolutional Neural Networks (CNNs) including VGG16, InceptionV3, ResNet50, and NASNetMobile, to automate diagnosis from MRI scans for the ease of patients and healthcare providers. The approach leverages transfer learning and Computer-Aided Diagnosis (CAD) to streamline the detection process. Hyperparameter tuning is integrated to optimize pretrained model parameters, encompassing factors such as optimizer choice, activation functions, the number of neurons in each dense layer, and learning rates. By systematically fine-tuning these hyperparameters, remarkable enhancements in tumor classification accuracy are demonstrated. This research emphasizes the significance of customized hyperparameter optimization for pretrained models, advancing the accuracy and efficiency of brain tumor detection.
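The transfer-learning tuning loop described above can be sketched as a grid search over the same kinds of knobs the abstract lists (activation function, dense-layer width, learning rate) applied to a classifier head. Since the MRI data and CNN backbones are not reproduced here, random features stand in for embeddings extracted by a frozen pretrained backbone, and scikit-learn's MLP stands in for the dense head; both substitutions are assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in for feature vectors extracted by a frozen pretrained CNN
# backbone (e.g. VGG16 without its top layers); real MRI data is not
# reproduced here.
X, y = make_classification(n_samples=800, n_features=64, n_informative=20,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Grid over the hyperparameters the abstract mentions: activation
# function, dense-layer width, and learning rate.
grid = GridSearchCV(
    MLPClassifier(max_iter=300, random_state=0),
    param_grid={"activation": ["relu", "tanh"],
                "hidden_layer_sizes": [(32,), (64,)],
                "learning_rate_init": [1e-3, 1e-2]},
    cv=3)
grid.fit(X_tr, y_tr)
print(grid.best_params_)
print(f"test accuracy: {grid.score(X_te, y_te):.3f}")
```

Cross-validated search keeps the hyperparameter choice from overfitting the single validation split; the final score is read off the untouched test set.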
Improved Brain Effective Connectivity Modeling by Dynamic Bayesian Networks
Ilkay Ulusoy

and 1 more

January 18, 2024
Background: If causal interactions between brain regions, i.e. effective connectivity modelling, could be accurately achieved, early diagnosis of many neurodegenerative diseases would be possible. In many recent comparative studies based on simulations, Bayesian network-based methods have proven more successful than others due to their non-linear and non-deterministic nature. New method: Although Dynamic Bayesian Networks (DBNs) are much more suitable for effective connectivity modelling in the brain, because they inherently model temporal information and cyclic behavior, they have not been tested in comparative studies due to the computational complexity of structure learning. In this study, reliable modelling with DBNs is achieved, which should accelerate scientific developments in neuroscience, and solutions are proposed to the computational problems believed to afflict this promising modelling method. Results: It is shown that discrete DBN structure learning, a data-driven approach, converges to the globally correct network structure when trained with appropriate data and imaginary data sizes, which are much smaller than the theoretically required amount of data. The quantization method is also very important for convergence. The Hill Climbing (HC) search method is shown to converge to the true structure in a reasonable number of iteration steps when the appropriate data and imaginary data sizes are used. Comparison with existing methods: The method (Improved-dDBN) is applied to commonly used simulation data and shown to achieve better and more consistent performance than existing methods in the literature under realistic scenarios, such as varying graph complexity, various input conditions, mixed signal-and-noise cases, and non-stationary connection conditions.
Conclusions: Since Improved-dDBN performed best across all scenarios of the simulated dataset, it is a good candidate for effective brain connectivity modelling on real datasets. The sample size of the dataset is very important for convergence and should therefore be checked, and the imaginary data size should be appropriate for the available sample size. Under these computational constraints, the approach proposed in this study can be applied easily.
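The score-based Hill Climbing idea behind this kind of structure learning can be illustrated on a deliberately tiny example: greedily add a parent to a node whenever doing so improves a BIC score on discrete data. This is a static, three-variable sketch under assumed data, far simpler than the paper's Improved-dDBN (no temporal slices, no imaginary/equivalent sample size), but it shows why the search recovers true edges and rejects spurious ones:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000
# Assumed ground-truth structure: A -> B, with C independent (all binary).
A = rng.integers(0, 2, N)
B = (A ^ (rng.random(N) < 0.1)).astype(int)  # B copies A with 10% noise
C = rng.integers(0, 2, N)
data = {"A": A, "B": B, "C": C}

def bic(child, parents):
    """BIC of `child` given a tuple of parent names (binary variables)."""
    y = data[child]
    if parents:
        cfg = np.zeros(N, dtype=int)       # encode each parent
        for p in parents:                  # configuration as an integer
            cfg = cfg * 2 + data[p]
        n_cfg = 2 ** len(parents)
    else:
        cfg, n_cfg = np.zeros(N, dtype=int), 1
    ll = 0.0
    for c in range(n_cfg):                 # log-likelihood per configuration
        mask = cfg == c
        n = mask.sum()
        if n == 0:
            continue
        for v in (0, 1):
            k = (y[mask] == v).sum()
            if k:
                ll += k * np.log(k / n)
    # One free parameter per parent configuration for a binary child.
    return ll - 0.5 * np.log(N) * n_cfg

def best_parents(child, candidates):
    """Greedy hill climbing: add one parent at a time while BIC improves."""
    parents, score = (), bic(child, ())
    improved = True
    while improved:
        improved = False
        for cand in candidates:
            if cand in parents:
                continue
            s = bic(child, parents + (cand,))
            if s > score:
                parents, score, improved = parents + (cand,), s, True
    return set(parents)

print("parents of B:", best_parents("B", ["A", "C"]))
print("parents of C:", best_parents("C", ["A", "B"]))
```

The BIC penalty grows with the number of parent configurations, which is what stops the search from adding spurious parents; this mirrors the paper's point that sample size (and, in the Bayesian-score case, imaginary data size) governs whether the search converges to the true structure.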