AUTHOREA

1857 signal processing and analysis Preprints

Please note: These are preprints and have not been peer reviewed. Data may be preliminary.
Transfer Learning for Anomaly Detection in Rotating Machinery using Data-driven Key O...
Jia Liang and 4 more

February 27, 2024
Anomaly detection is an important task in industrial applications. However, designing an accurate anomaly detector can be very challenging in settings where anomalous labels are sparse or, in the worst case, missing in the training data. To mitigate this lack of anomalous labels in the domain of interest, existing approaches use transfer learning, leveraging information from anomalous samples in a closely related domain. Although previous studies have shown good results from applying transfer learning, they do not specifically address the issue of high false-positive rates, especially in industrial settings. High false-positive rates can arise from misleading information present in uninformative features. Inspired by this observation, the paper focuses on identifying key input features—termed as such due to their strong predictive power in anomaly detection. A transfer learning approach is introduced that leverages the optimal \(f_{\beta}\) score for key feature estimation. This approach involves a weight vector that amplifies key features and attenuates uninformative inputs during prediction. We demonstrate the capabilities of our proposed method through an industrial application: anomaly detection for rotating machinery. Based on our findings, anomaly detection algorithms that utilize data-driven features obtained through the proposed method outperform detectors based on features identified by domain experts. More importantly, our proposed framework can work with any downstream unsupervised anomaly detection algorithm, allowing us to freely choose the best algorithm for the anomaly detection task.
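To make the key-feature idea concrete, here is a minimal sketch, assuming (our reading of the abstract, not the authors' code) that each feature is scored by the best \(f_{\beta}\) it achieves as a single-feature threshold detector on the label-rich source domain; `key_feature_weights` and its interface are hypothetical names.

```python
import numpy as np
from sklearn.metrics import fbeta_score

def key_feature_weights(X_src, y_src, beta=1.0, n_thresholds=50):
    """Score each feature by the best f-beta it reaches as a one-feature
    threshold detector on the source (label-rich) domain, then normalize
    the scores into a weight vector."""
    n_features = X_src.shape[1]
    scores = np.zeros(n_features)
    for j in range(n_features):
        col = X_src[:, j]
        best = 0.0
        for t in np.quantile(col, np.linspace(0.05, 0.95, n_thresholds)):
            pred = (col > t).astype(int)
            best = max(best,
                       fbeta_score(y_src, pred, beta=beta, zero_division=0))
        scores[j] = best
    return scores / scores.sum()  # weights amplify predictive features

# usage: rescale target-domain inputs before any unsupervised detector
# X_weighted = X_target * key_feature_weights(X_src, y_src, beta=0.5)
```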
Solving the unsolvable non-stationary M/E_k/1 queue's state variable open problem
Dr Ismail A Mageed
February 14, 2024
This paper is a continuation of my revolutionary theory of solving the pointwise fluid-flow approximation model for time-varying queues. The long-standing simulative approach is thus replaced by an exact analytical solution built on a constant ratio β (Ismail's ratio). The stability dynamics of the time-varying M/E_k/1 queueing system are then examined numerically in relation to time, β, and the queueing parameters.
Evolution and Efficiency in Neural Architecture Search: Bridging the Gap Between Expe...
Fanfei Meng and 2 more

February 14, 2024
The paper provides a comprehensive overview of Neural Architecture Search (NAS), emphasizing its evolution from manual design to automated, computationally driven approaches. It covers the inception and growth of NAS, highlighting its application across various domains, including medical imaging and natural language processing. The document details the shift from expert-driven design to algorithm-driven processes, exploring initial methodologies like reinforcement learning and evolutionary algorithms. It also discusses the challenges of computational demands and the emergence of efficient NAS methodologies, such as Differentiable Architecture Search and hardware-aware NAS. The paper further elaborates on NAS's application in computer vision, NLP, and beyond, demonstrating its versatility and potential for optimizing neural network architectures across different tasks. Future directions and challenges, including computational efficiency and the integration with emerging AI domains, are addressed, showcasing NAS's dynamic nature and its continued evolution towards more sophisticated and efficient architecture search methods.
Exploring the Potential of ESP8266: A Wireless Control Experiment
Paulo Ricardo and 2 more

February 14, 2024
This paper details an experiment utilizing ESP8266 modules as servers to wirelessly control diverse electrical appliances in home automation. The experiment showcased the modules' capability to respond to commands issued through a web interface from mobile, desktop, and tablet platforms. While most of the experiment ran smoothly, occasional freezing and connectivity disruptions were observed. The paper summarizes the experiment's successes, discusses the challenges encountered, and outlines a forward-looking perspective, including the integration of a custom PCB for enhanced system stability.
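As an illustration of the server pattern the abstract describes, here is a minimal MicroPython sketch (an assumption on our part; the authors' firmware, pin wiring, and credentials are not given): the ESP8266 joins Wi-Fi and toggles a relay from two URLs.

```python
# Minimal MicroPython sketch for an ESP8266 web-controlled relay.
# GPIO0 and the Wi-Fi credentials below are placeholders.
import network, socket
from machine import Pin

relay = Pin(0, Pin.OUT)            # GPIO0 wired to a relay module (assumed)

sta = network.WLAN(network.STA_IF)
sta.active(True)
sta.connect('SSID', 'PASSWORD')    # placeholder credentials
while not sta.isconnected():
    pass

srv = socket.socket()
srv.bind(('0.0.0.0', 80))
srv.listen(1)
while True:
    conn, _ = srv.accept()
    req = conn.recv(512)
    if b'GET /on' in req:          # e.g. http://<esp-ip>/on
        relay.value(1)
    elif b'GET /off' in req:
        relay.value(0)
    conn.send(b'HTTP/1.1 200 OK\r\n\r\n'
              b'<a href="/on">ON</a> <a href="/off">OFF</a>')
    conn.close()
```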
Improved estimation of the directional wave spectrum from marine radar images by empl...
Susanne Støle-Hentschel and 4 more

February 26, 2024
A document by Susanne Støle-Hentschel.
Facilitating URLLC vis-à-vis UAV-enabled relaying for MEC Systems in 6G Networks
Ali Ranjha and 3 more

February 12, 2024
The futuristic sixth-generation (6G) networks will empower ultra-reliable and low-latency communications (URLLC), enabling a wide array of mission-critical applications such as mobile edge computing (MEC) systems, which are largely unsupported by fixed communication infrastructure. To remedy this issue, unmanned aerial vehicles (UAVs) have recently come into the limelight to facilitate MEC for Internet of Things (IoT) devices, as they provide desirable line-of-sight (LoS) communications compared to fixed terrestrial networks, thanks to their added flexibility and three-dimensional (3D) positioning. In this paper, we consider UAV-enabled relaying for MEC systems for uplink transmissions in 6G networks, and we aim to optimize mission completion time subject to the constraints of resource allocation, including UAV transmit power, UAV CPU frequency, decoding error rate, blocklength, communication bandwidth, and task partitioning, as well as 3D UAV positioning. Moreover, to solve the non-convex optimization problem, we propose three different algorithms: successive convex approximation (SCA), an altered genetic algorithm (AGA), and smart exhaustive search (SES). Thereafter, based on time complexity, execution time, and convergence analysis, we select AGA to solve the given optimization problem. Simulation results demonstrate that the proposed algorithm can successfully minimize the mission completion time, perform power allocation at the UAV side to mitigate information leakage and eavesdropping, and determine a 3D UAV position, yielding better results than the fixed benchmark sub-methods. Lastly, subject to 3D UAV positioning, AGA can also effectively reduce the decoding error rate for supporting URLLC services.
Adaptive Multiview Graph Convolutional Network for 3D Point Cloud Classification and...
Wanhao Niu and 2 more

February 12, 2024
Point cloud classification and segmentation are crucial tasks for point cloud processing and have a wide range of applications, such as autonomous driving and robot grasping. Some pioneering methods, including PointNet, VoxNet, and DGCNN, have made substantial advancements. However, most of these methods do not consider the large-distance geometric relationships among points seen from different perspectives within the point cloud, which limits feature extraction and prevents further improvement in classification and segmentation accuracy. To address this issue, we propose an adaptive multiview graph convolutional network (AM-GCN), which comprehensively synthesizes both the global geometric features of the point cloud and the local features within the projection planes of multiple views through an adaptive graph construction method. First, an adaptive rotation module in AM-GCN is proposed to predict a more favorable angle of view for projection. Then, a multi-level feature extraction network can be flexibly constructed from spatial-based or spectral-based graph convolution layers. Finally, AM-GCN is evaluated on ModelNet40 for classification, ShapeNetPart for part segmentation, and ScanNetv2 and S3DIS for scene segmentation, which demonstrates the robustness of AM-GCN and its competitive performance compared with existing methods. It is worth noting that it achieves state-of-the-art performance in many categories.
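For readers unfamiliar with graph construction on point clouds, the sketch below shows the generic k-nearest-neighbor step that such multiview graph methods build on (an illustration of the general technique, not the authors' AM-GCN code):

```python
import numpy as np

def knn_graph(points, k=16):
    """Connect each point to its k nearest Euclidean neighbors,
    the local-graph step underlying DGCNN-style convolutions."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]     # skip self at column 0
    n = points.shape[0]
    adj = np.zeros((n, n), dtype=bool)
    adj[np.arange(n)[:, None], idx] = True
    return adj                                    # one graph "view"

# Projecting the cloud onto a rotated plane before calling knn_graph
# yields the per-view local graphs that a multiview network aggregates.
pts = np.random.rand(1024, 3)
A = knn_graph(pts)
```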
Multilinear Kernel Regression and Imputation via Manifold Learning
Duc Thien Nguyen and Konstantinos Slavakis

February 12, 2024
This paper introduces a novel nonparametric framework for data imputation, coined multilinear kernel regression and imputation via the manifold assumption (MultiL-KRIM). Motivated by manifold learning, MultiL-KRIM models data features as a point cloud located in or close to a user-unknown smooth manifold embedded in a reproducing kernel Hilbert space. Unlike typical manifold-learning routes, which seek low-dimensional patterns via regularizers based on graph-Laplacian matrices, MultiL-KRIM builds instead on the intuitive concept of tangent spaces to manifolds and incorporates collaboration among point-cloud neighbors (regressors) directly into the data modeling term of the loss function. Multiple kernel functions are allowed to offer robustness and rich approximation properties, while multiple matrix factors offer low-rank modeling, integrate dimensionality reduction, and streamline computations with no need for training data. Two important application domains showcase the functionality of MultiL-KRIM: time-varying graph-signal (TVGS) recovery, and reconstruction of highly accelerated dynamic-magnetic-resonance-imaging (dMRI) data. Extensive numerical tests on real and synthetic data demonstrate MultiL-KRIM's remarkable speedups over its predecessors and its outperformance of prevalent "shallow" data-imputation techniques, with a more intuitive and explainable pipeline than deep-image-prior methods.
GDN: Generative Decoupling Network for Digital Subtraction Angiography Generation
Ruibo Liu and 10 more

February 12, 2024
Objective: Digital subtraction angiography (DSA) is critically important for cerebrovascular disease diagnosis and treatment. However, artifacts and noise are inevitable and reduce image quality, which can make clinical diagnosis difficult. In this paper, we introduce a novel deep learning architecture that exploits an information-decoupling training strategy to generate high-quality DSA images. Methods: We propose the generative decoupling network, a feature-decoupling convolutional network that maximizes the difference between different structures through a decoupling training strategy. In this network, an axial residual block and a learnable sampling method are proposed to enhance the strength of feature extraction. Results: The results showed that our proposed method significantly outperforms existing methods in the DSA generation task. Furthermore, we quantified the method using the metrics SSIM, PSNR, VSI, FID, and FSIM, with results of 93.57%, 24.18 dB, 98.04%, 351.59, and 89.95%, respectively. Conclusion: Our method can produce high-quality DSA images with minimal or even no artifacts and noise. Significance: The proposed method can effectively reduce artifacts and noise and generate high-quality DSA images with complete and clear vascular structures.
SLYKLatent, a Learning Framework for efficient Facial Features Estimation
Samuel Adebayo and 2 more

February 12, 2024
In this research, we present SLYKLatent, a novel approach for enhancing gaze estimation by addressing appearance instability challenges in datasets due to aleatoric uncertainties, covariant shifts, and test-domain generalization. SLYKLatent utilizes self-supervised learning for initial training with facial expression datasets, followed by refinement with a patch-based tri-branch network and an inverse explained-variance-weighted training loss function. Our evaluation on benchmark datasets achieves an 8.7% improvement on Gaze360, rivals top MPIIFaceGaze results, and leads on a subset of ETH-XGaze by 13%, surpassing existing methods by significant margins. Additionally, adaptability tests on RAF-DB and AffectNet show 86.4% and 60.9% accuracies, respectively. Ablation studies confirm the effectiveness of SLYKLatent's novel components. This approach has strong potential in human-robot interaction.
A Mathematical Theory of Semantic Communication
Kai Niu and 1 more

February 12, 2024
The year 1948 witnessed the historic moment of the birth of classic information theory (CIT). Guided by CIT, modern communication techniques have approached the theoretic limitations, such as the entropy H(U), the channel capacity C = max_{p(x)} I(X;Y), and the rate-distortion function R(D) = min_{p(x̂|x): E d(X,X̂) ≤ D} I(X;X̂). Semantic communication paves a new direction for future communication techniques, whereas the guiding theory is still missing. In this paper, we try to establish a systematic framework of semantic information theory (SIT). We investigate the behavior of semantic communication and find that synonymy is its basic feature, so we define the synonymous mapping between semantic information and syntactic information. Stemming from this core concept, we introduce the measures of semantic information, such as the semantic entropy H_s(Ũ), the up/down semantic mutual information I^s(X̃;Ỹ) and I_s(X̃;Ỹ), the semantic channel capacity C_s, and the semantic rate-distortion function R_s(D); the key measures are restated in display form below. Furthermore, we prove three coding theorems of SIT by using random coding and (jointly) typical decoding/encoding: the semantic source coding theorem, the semantic channel coding theorem, and the semantic rate-distortion coding theorem. We find that the limits of SIT are extended by synonymous mapping, that is, H_s(Ũ) ≤ H(U), C_s ≥ C, and R_s(D) ≤ R(D). All these works constitute the basis of semantic information theory. In addition, we discuss the semantic information measures in the continuous case. Especially, for the band-limited Gaussian channel, we obtain a new channel capacity formula, C_s = B log[S^4 (1 + P/(N_0 B))], with synonymous length S. In summary, the theoretic framework of SIT proposed in this paper is a natural extension of CIT and may reveal great performance potential for future communication.
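For readability, here is a reconstruction of the abstract's formulas in display form. The tilded variables denote semantic counterparts of the syntactic ones; the assignment of the up/down mutual informations to capacity and rate-distortion is our inference from the stated orderings C_s ≥ C and R_s(D) ≤ R(D), and the notation may differ from the paper's.

```latex
\begin{align*}
  C   &= \max_{p(x)} I(X;Y), &
  R(D)   &= \min_{p(\hat{x}\mid x):\, \mathbb{E}\, d(X,\hat{X}) \le D} I(X;\hat{X}),\\
  C_s &= \max_{p(x)} I^{s}(\tilde{X};\tilde{Y}), &
  R_s(D) &= \min_{p(\hat{x}\mid x):\, \mathbb{E}\, d_s(\tilde{X},\hat{\tilde{X}}) \le D} I_{s}(\tilde{X};\hat{\tilde{X}}),
\end{align*}
with the orderings $H_s(\tilde{U}) \le H(U)$, $C_s \ge C$, $R_s(D) \le R(D)$,
and, for the band-limited Gaussian channel with synonymous length $S$,
\[
  C_s = B \log\!\left[ S^{4} \left( 1 + \frac{P}{N_0 B} \right) \right].
\]
```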
Two Heads Better than One: Dual Degradation Representation for Blind Super-Resolution

Hsuan Yuan and 7 more

February 06, 2024
Previous methods have demonstrated remarkable performance in single-image super-resolution (SISR) tasks with known and fixed degradation (e.g., bicubic downsampling). However, when the actual degradation deviates from these assumptions, these methods may experience significant declines in performance. In this paper, we propose a Dual-Branch Degradation Extractor Network to address the blind SR problem. While some blind-SR methods assume noise-free degradation and others do not explicitly consider the presence of noise in the degradation model, our approach predicts two unsupervised degradation embeddings that represent blurry and noisy information, respectively. The SR network can then be adapted to the blur embedding and the noise embedding in distinct ways. Furthermore, we treat the degradation extractor as a regularizer to capitalize on the differences between SR and HR images. Extensive experiments on several benchmarks demonstrate that our method achieves state-of-the-art performance on the blind SR problem.
Detecting Multimedia Generated by Large AI Models: A Survey
Li Lin and 9 more

February 06, 2024
The rapid advancement of Large AI Models (LAIMs), particularly diffusion models and large language models, has marked a new era where AI-generated multimedia is increasingly integrated into various aspects of daily life. Although beneficial in numerous fields, this content presents significant risks, including potential misuse, societal disruptions, and ethical concerns. Consequently, detecting multimedia generated by LAIMs has become crucial, with a marked rise in related research. Despite this, there remains a notable gap in systematic surveys that focus specifically on detecting LAIM-generated multimedia. Addressing this, we provide the first survey to comprehensively cover existing research on detecting multimedia (such as text, images, videos, audio, and multimodal content) created by LAIMs. Specifically, we introduce a novel taxonomy for detection methods, categorized by media modality, and aligned with two perspectives: pure detection (aiming to enhance detection performance) and beyond detection (adding attributes like generalizability, robustness, and interpretability to detectors). Additionally, we present a brief overview of generation mechanisms, public datasets, and online detection tools to provide a valuable resource for researchers and practitioners in this field. Furthermore, we identify current challenges in detection and propose directions for future research that address unexplored, ongoing, and emerging issues in detecting multimedia generated by LAIMs. Our aim for this survey is to fill an academic gap and contribute to global AI security efforts, helping to ensure the integrity of information in the digital realm. The project link is https://github.com/Purdue-M2/Detect-LAIM-generated-Multimedia-Survey.
Analysis of Sinusoid Plus Random Telegraph Noise in Nonlinear Devices: Series Chara...
Cameron Pike
February 06, 2024
Random telegraph noise (RTN) is a common phenomenon in semiconductors, and it is often desired to quantify the effect of observed RTN on circuits. In this paper we show that the series form of the characteristic-function method of nonlinear analysis is suitable for modeling and predicting the output correlation and spectrum of a combination of signals and RTN. We derive the general characteristic function for RTN defined by a single transition parameter, and identify its spectrum as Lorentzian (Cauchy), proportional to the reciprocal of the square of the frequency. We then show how the output spectrum is a weighted sum of the respective spectra of the input sinusoid and RTN. Using a simple large-signal model of a MOSFET amplifier, we compute the contributions to the output spectrum as SNR and sinusoid amplitude are varied, and present numerical results. The procedure is easily adapted to other signals and nonlinear circuits.
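For context, these are the textbook forms behind the abstract's claim, for symmetric two-level RTN of amplitude a and transition rate λ (the symbols are our choice; the paper's single-parameter notation may differ):

```latex
\[
  R_{xx}(\tau) = a^{2} e^{-2\lambda |\tau|}, \qquad
  S_{xx}(f) = \frac{4 a^{2} \lambda}{(2\lambda)^{2} + (2\pi f)^{2}},
\]
```

a Lorentzian (Cauchy) spectrum that falls off as 1/f² well above the corner frequency f_c = λ/π.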
A Decentralized Dynamic Relaying-Based Framework for Enhancing LoRa Networks Performa...
Hamza Haif and 2 more

January 30, 2024
Long-Range (LoRa) technology holds tremendous potential for regulating and coordinating communication among Internet-of-Things (IoT) devices due to its low power consumption and cost-effectiveness. However, LoRa faces significant obstacles such as reduction in coverage area, a high packet drop ratio (PDR), and an increased likelihood of collisions, all of which result in substandard data rates. In this paper, we present a novel approach that employs a relaying node capable of allocating resources dynamically based on signal parameters. In particular, the geometric placement of the relay node is determined by a genetic algorithm that maximizes signal-to-noise ratio (SNR) and signal-to-interference ratio (SIR) success probabilities. Using an equal-area-based (EAB) spreading factor (SF) distance-allocation scheme, the coverage area is sliced into distinct regions in order to derive the success probabilities for the different communication stages. Furthermore, we present a frequency channel shuffling algorithm to prevent collisions between end devices (EDs) without increasing the complexity of the relaying nodes. Through extensive simulations, we demonstrate that our proposed scheme effectively expands the coverage area, conserves transmission resources, and enhances the system's throughput. Specifically, our approach extends the range by up to 40%, increases the throughput by up to 50% compared to conventional methods, and achieves a 40% increase in success probability. To validate the practicality of our approach, we implement our algorithm in an active LoRa network utilizing an ESP32 LoRa SX1276 module, showcasing its compatibility with real-world scenarios.
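The equal-area slicing itself is simple to state; here is a minimal sketch of our reading of it (ring radii follow from equating annulus areas; the SF set and radius are placeholders, not the authors' configuration):

```python
import numpy as np

def equal_area_sf_rings(R, sfs=(7, 8, 9, 10, 11, 12)):
    """Slice a disk of radius R into len(sfs) annuli of equal area
    (pi*R^2/K each) and assign increasing spreading factors outward."""
    K = len(sfs)
    radii = R * np.sqrt(np.arange(1, K + 1) / K)  # outer radius per ring
    return list(zip(sfs, radii))

for sf, r_out in equal_area_sf_rings(3000.0):     # 3 km cell, illustrative
    print(f"SF{sf}: up to {r_out:.0f} m")
```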
Neuromorphic Event-based Processing of Transradial Intraneural Recording for Online G...
Farah Baracat and 4 more

February 06, 2024
A document by Farah Baracat.
Retrieval of Surface Waves Spectrum from UAV Nadir Video
Aviv Solodoch and 3 more

February 21, 2024
Sea surface wave spectrum measurements are necessary for a host of basic research questions as well as for engineering and societal needs. However, most measurement techniques require great investment in infrastructure and time-intensive deployment. We propose a new approach to wave measurement from standard video footage recorded by low-cost unmanned aerial vehicles (UAVs). We address UAV nadir imagery, which is particularly simple to obtain operationally. The method relies on the fact that the optical contrast of surface gravity waves is proportional to their steepness. We present a robust methodology of regularized inversion of the optical imagery spectra, resulting in retrieval of the three-dimensional wavenumber-frequency sea surface height spectrum. The system was tested in several sea trials under different bathymetric depths and sea state conditions. The resulting wave bulk parameters and spectral characteristics are in good agreement with collocated measurements from wave buoys and bottom-mounted acoustic sensors. Simple deployment, mobility, and flexibility in spatial coverage show the great potential of UAVs to significantly enhance the availability of wave measurements.
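The first step of such a retrieval, before the regularized inversion the abstract describes, can be pictured with a minimal sketch (our illustration, with assumed array shapes and no windowing or calibration; not the authors' pipeline):

```python
import numpy as np

def wavenumber_frequency_spectrum(frames, dx, dt):
    """3-D FFT of a georectified nadir video stack: the optical-contrast
    power spectrum over (omega, ky, kx), the raw material that a
    regularized inversion would map to a sea-surface-height spectrum.
    frames: (nt, ny, nx) array; dx: pixel size [m]; dt: frame interval [s]."""
    cube = frames - frames.mean()
    F = np.fft.fftshift(np.fft.fftn(cube))
    power = np.abs(F) ** 2
    omega = np.fft.fftshift(np.fft.fftfreq(frames.shape[0], d=dt)) * 2 * np.pi
    ky = np.fft.fftshift(np.fft.fftfreq(frames.shape[1], d=dx)) * 2 * np.pi
    kx = np.fft.fftshift(np.fft.fftfreq(frames.shape[2], d=dx)) * 2 * np.pi
    return power, omega, ky, kx
```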
Brain Age Prediction Using Interpretable Multifeature-based Convolutional Neural Netw...
Lijun Bai and 11 more

February 06, 2024
Convolutional neural networks (CNNs) can predict chronological age accurately from MRI. However, most studies use a single feature to predict brain age in healthy individuals, ignoring the added information of multiple sources. Here, we developed an interpretable 3D CNN model to predict brain age based on a large, heterogeneous dataset (N = 1464). Compared with state-of-the-art methods, our prediction framework has the following improvements. First, our model utilized multiple 3D features derived from T1w data as inputs, reduced the mean absolute error (MAE) of age prediction to 3.32 years, and improved Pearson's r to 0.96 on 154 healthy controls (HCs). Strong generalizability of our model was also validated across different centers. Second, network occlusion sensitivity analysis (NOSA) was adopted to interpret our model and capture the age-specific pattern of brain aging. Regions contributing significantly to brain age differed between HCs and patients with mild traumatic brain injury (mTBI) at different life stages, but all lay within the subcortical areas throughout the lifespan. The left hemisphere was confirmed to contribute more to brain age prediction throughout the lifespan. Our research showed that an increased brain-predicted age gap (brain-PAG) in 98 acute mTBI patients was highly correlated with cognitive impairment and a higher level of plasma neurofilament light, a marker of neurodegeneration. The higher brain-PAG also showed a longitudinal and persistent nature in patients with follow-up examination. The interpretable framework might also provide hope for testing the performance of related drugs or treatments.
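Occlusion sensitivity of the kind NOSA builds on is simple to state in code; here is a generic sketch for a 3-D regressor (our illustration; `predict`, the patch size, and the fill value are assumptions, not the authors' settings):

```python
import numpy as np

def occlusion_sensitivity_3d(predict, volume, patch=8, stride=8, fill=0.0):
    """Occlude one cube of the input volume at a time and record how much
    the predicted age changes; large values mark regions the model
    relies on. predict: callable mapping a 3-D array to a scalar age."""
    base = predict(volume)
    heat = np.zeros(volume.shape)
    for z in range(0, volume.shape[0] - patch + 1, stride):
        for y in range(0, volume.shape[1] - patch + 1, stride):
            for x in range(0, volume.shape[2] - patch + 1, stride):
                v = volume.copy()
                v[z:z+patch, y:y+patch, x:x+patch] = fill
                heat[z:z+patch, y:y+patch, x:x+patch] = abs(predict(v) - base)
    return heat
```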
Sandwiched Compression: Repurposing Standard Codecs with Neural Network Wrappers
Onur G. Guleryuz and 8 more

February 12, 2024
We propose sandwiching standard image and video codecs between pre- and post-processing neural networks. The networks are jointly trained through a differentiable codec proxy to minimize a given rate-distortion loss. This sandwich architecture not only improves the standard codec's performance on its intended content, it can also effectively adapt the codec to other types of image/video content and to other distortion measures. Essentially, the sandwich learns to transmit "neural code images" that optimize overall rate-distortion performance even when the overall problem is well outside the scope of the codec's design. Through a variety of examples, we apply the sandwich architecture to sources with different numbers of channels, higher resolution, higher dynamic range, and perceptual distortion measures. The results demonstrate substantial improvements (up to 9 dB gains or up to 30% bitrate reductions) compared to alternative adaptations. We derive VQ equivalents for the sandwich, establish optimality properties, and design differentiable codec proxies approximating current standard codecs. We further analyze model complexity, visual quality under perceptual metrics, as well as sandwich configurations that offer interesting potential in image/video compression and streaming.
Novel Method for Real-Time Human Core Temperature Estimation using Extended Kalman Fi...
Rojan Aslani and 3 more

February 12, 2024
The gold-standard methods for real-time core temperature (CT) monitoring are invasive and cost-inefficient. The application of Kalman filters for indirect estimation of CT has been explored in the literature since 2010. This paper presents a comparative study between different state-of-the-art Extended Kalman Filter (EKF) estimation algorithms and a new approach based on a biomimetic, pre-emptive mapping of the human body response. In this new method, a mapping model of the physiological response of heart rate (HR) change to CT increase is pre-applied to the input of the EKF CT-estimation procedure in near real time. The algorithm was trained and tested using two datasets (18 participants in total). The best-performing algorithm with this novel pre-emptive mapping achieved an average root mean squared error (RMSE) of 0.34°C, while the best state-of-the-art EKF model (without pre-emptive mapping) resulted in an RMSE of 0.41°C, a 17% improvement for our novel method. Given these favorable outcomes, it is compelling to assess its efficacy on a larger dataset in the near future.
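The EKF core of such estimators can be sketched in a few lines; this is a generic scalar filter in the spirit of the models the paper compares, not its implementation, and the observation model and noise variances below are placeholders to be fitted:

```python
import numpy as np

def ekf_core_temp(hr_series, h, dh, q=1e-4, r=300.0, ct0=37.0, p0=0.0):
    """Scalar EKF estimating core temperature (CT) from heart rate (HR).
    h(ct) maps CT to expected HR and dh is its derivative; q and r are
    process and measurement noise variances (all placeholders)."""
    ct, p = ct0, p0
    estimates = []
    for hr in hr_series:
        p = p + q                          # time update (random-walk CT)
        H = dh(ct)                         # linearized observation slope
        k = p * H / (H * H * p + r)        # Kalman gain
        ct = ct + k * (hr - h(ct))         # measurement update
        p = (1.0 - k * H) * p
        estimates.append(ct)
    return np.array(estimates)

# usage with an illustrative quadratic HR model (placeholder coefficients):
# est = ekf_core_temp(hr, h=lambda c: -4.6*c**2 + 384*c - 7887,
#                     dh=lambda c: -9.2*c + 384)
```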
Estimating Gait Events and Speed in the Real World with a Head-Worn IMU
Paolo Tasca and 4 more

January 29, 2024
Recently, head-worn inertial sensors have been proposed to characterize gait. However, only a few methods allow for both initial-foot-contact detection and stride-by-stride gait speed estimation, and none of them has been validated in real-world settings. In this study, we assessed the performance of a two-step machine learning algorithm that estimates initial foot contacts and speed in real-world conditions with a single inertial sensor attached to the temporal region of the head. A deep convolutional network is used to detect gait cycles. Then, gait speed is inferred from the detected gait cycles by a regression model. The models were trained and validated against a multi-sensor wearable system with data from 15 healthy young adults during both structured and real-world walking trials. The stride detector achieved a high F1-score (> 92%) and a mean absolute error smaller than 40 ms. High correlation between target and predicted speed values (Spearman coefficient > 0.86) and low mean absolute error (< 0.08 m/s) were observed. These findings pave the way for gait analysis frameworks based on the integration of inertial sensors into head-worn devices.
AT-2FF: Adaptive Type-2 Fuzzy Filter for De-noising Images Corrupted with Salt-and-Pe...
Vikas Singh
January 26, 2024
The demand for more accurate and visually pleasing images is increasing with the number of digital photos taken daily. However, images captured by modern cameras are degraded by noise, which deteriorates visual image quality. Existing de-noising approaches in the literature cannot reduce noise without losing image features (edges, corners, and other sharp structures). The developed de-noising method detects and de-noises noisy pixels without losing image features at both higher and lower noise levels. The present approach can filter noise at or above 90% noise density while preserving image details, enhancing downstream image processing tasks.
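The detect-then-filter structure of such methods can be sketched as follows. This simplified stand-in flags extreme pixel values as impulse-noise candidates and medians only those, so clean edges pass through untouched; the type-2 fuzzy membership logic that is the paper's actual contribution is omitted, and the thresholds are assumptions:

```python
import numpy as np

def detect_and_denoise(img, win=3):
    """Replace only salt-and-pepper candidates (0 or 255 in an 8-bit
    image) with the median of their noise-free neighbors."""
    out = img.astype(float).copy()
    noisy = (img == 0) | (img == 255)          # impulse-noise candidates
    h = win // 2
    pad = np.pad(out, h, mode='reflect')
    clean = np.pad(~noisy, h, mode='reflect')  # mask of trusted pixels
    for y, x in zip(*np.nonzero(noisy)):
        patch = pad[y:y + win, x:x + win]
        good = patch[clean[y:y + win, x:x + win]]
        out[y, x] = np.median(good) if good.size else np.median(patch)
    return out.astype(img.dtype)
```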
NisI: A Tool for Non-Ideal System Identification
Jeferson José de Lima

Jeferson José de Lima

and 5 more

February 20, 2024
Non-Ideal System Identification (NisI) is a Python code developed for the identification of non-ideal systems from a single observed state. A non-ideal system (NIS) exhibits significantly complex dynamic behavior due to the presence of nonlinear and discontinuous equations, resulting in chaotic behavior. Identifying parameters for a NIS is more challenging when only certain states of the system are observable. The proposed method uses a Particle Swarm Optimization (PSO) meta-heuristic to minimize the difference between experimental data and a proposed nonlinear model. The code has successfully identified all unknown parameters of the non-ideal system in experimental settings.
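A bare-bones PSO of the kind such identification builds on looks like this (illustrative only, not the NisI code; the loss would simulate the non-ideal model and compare its single observed state to the measurements):

```python
import numpy as np

def pso(loss, bounds, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize loss(p) over box-constrained parameter vectors p."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T                   # bounds: [(lo, hi), ...]
    x = rng.uniform(lo, hi, (n, lo.size))         # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest, pcost = x.copy(), np.array([loss(p) for p in x])
    g = pbest[pcost.argmin()]                     # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([loss(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[pcost.argmin()]
    return g, pcost.min()

# loss(p) might return np.mean((simulate(p) - x_experimental)**2)
# on the single observed state.
```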
Comprehensive Link-Level Simulator for Terahertz MIMO Integrated Sensing and Communic...
Hanchen Shi and 2 more

January 26, 2024
Terahertz (THz) integrated sensing and communication (ISAC) with a multiple-input multiple-output (MIMO) architecture is recognized as a promising interdisciplinary technology for ultra-high-rate mobile communications, since such systems enable the narrow-beam tracking that is necessary in the THz band. In this work, a link-level simulator for THz MIMO ISAC in time-division duplex (TDD) operation is proposed for the design and analysis of mobile systems. Compared to simulators in the literature, the proposed simulator is more practical and comprehensive, employing two-dimensional motion simulation instead of numerical evaluation and considering THz characteristics such as wideband echo, multipath components, and molecular absorption. Specifically, the simulator supports the standard orthogonal frequency-division multiplexing (OFDM) and discrete Fourier transform spread OFDM (DFT-s-OFDM) waveforms for sensing and communication simultaneously. Trade-offs between communication and sensing metrics required for waveform numerology design are investigated. In particular, by exploiting the TDD framework's integration capability, range-velocity-angle estimation with a virtual array and sensing-aided downlink spatial multiplexing are co-designed. Additionally, a user interface with elaborate parameter configuration is introduced. Finally, we implement an urban vehicle-to-vehicle (V2V) application case to verify the simulator. The simulation results demonstrate the feasibility of the developed integrated architecture.
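As background to the sensing side, the classic OFDM radar processing chain that such simulators evaluate can be stated in a few lines (textbook technique, not the simulator's code; array shapes are assumptions):

```python
import numpy as np

def ofdm_range_doppler(tx_syms, rx_syms):
    """Classic OFDM radar processing: element-wise division removes the
    communication payload, an IFFT across subcarriers resolves range,
    and an FFT across symbols resolves velocity (Doppler).
    tx_syms, rx_syms: (n_subcarriers, n_symbols) complex arrays."""
    F = rx_syms / tx_syms                 # per-subcarrier channel estimate
    range_profile = np.fft.ifft(F, axis=0)
    rd_map = np.fft.fft(range_profile, axis=1)
    return np.abs(rd_map)                 # range-Doppler magnitude map
```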