Showing 41 - 60 results of 124 for search '"parallel computing"', query time: 0.04s
  41.

    FPGA-Based Distributed Union-Find Decoder for Surface Codes by Namitha Liyanage, Yue Wu, Siona Tagare, Lin Zhong

    Published 2024-01-01
    “…The implementation employs a scalable architecture called Helios that organizes parallel computing resources into a hybrid tree-grid structure. …”
    Get full text
    Article
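The union–find structure at the heart of such decoders can be sketched in a few lines. The following is a minimal, single-threaded Python sketch of union–find with path compression and union by rank, illustrating how syndrome nodes could be merged into clusters; it is an illustration of the underlying data structure, not the Helios architecture described in the paper.

```python
class UnionFind:
    """Union-find with path halving and union by rank."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path halving: point each visited node at its grandparent.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False          # already in the same cluster
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra       # attach the shallower tree to the deeper one
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True

# Merging hypothetical syndrome nodes into clusters:
uf = UnionFind(6)
uf.union(0, 1); uf.union(1, 2); uf.union(3, 4)
print(uf.find(0) == uf.find(2))  # True: 0, 1, 2 form one cluster
print(uf.find(0) == uf.find(3))  # False: 3, 4 form a separate cluster
```

Both `find` and `union` run in near-constant amortized time, which is what makes the structure attractive for real-time decoding.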
  42.

    A Cyber-ITS Framework for Massive Traffic Data Analysis Using Cyber Infrastructure by Yingjie Xia, Jia Hu, Michael D. Fontaine

    Published 2013-01-01
    “…As a solution to these problems, this paper proposes a Cyber-ITS framework to perform data analysis on Cyber Infrastructure (CI), which by nature comprises parallel-computing hardware and software systems, in the context of ITS. …”
    Get full text
    Article
  43.

    IMPLEMENTATION OF THE PARALLEL ALGORITHM OF NONISOTHERMAL HEAT AND MOISTURE MIGRATION TASK SIMULATION IN NATURAL DISPERSE ENVIRONMENTS by P. K. Shalkevich, S. P. Kundas, I. A. Gishkeluk

    Published 2016-10-01
    “…The paper considers the results of implementing, in the Matlab package, the developed parallel computing algorithm for non-isothermal moisture transfer in natural disperse environments. …”
    Get full text
    Article
  44.

    On the calculation of integer sequences, associated with twin primes by Igoris Belovas, Martynas Sabaliauskas, Paulius Mykolaitis

    Published 2023-11-01
    “…Using the probabilistic Miller–Rabin primality test and parallel computing technologies, the distribution of prime pairs in the intervals (2n; 2n+1] is studied experimentally. …”
    Get full text
    Article
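The experiment's core, counting twin prime pairs (p, p+2) with p in (2^n, 2^(n+1)], can be sketched with a Miller–Rabin test. The Python below is a sequential sketch (in a parallel run the intervals would be split across workers); the fixed bases make the test deterministic for n below about 3.4 × 10^14, and all names are illustrative, not the authors' code.

```python
def is_prime(n):
    """Miller-Rabin test, deterministic for n < 3.4e14 with these bases."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # a witnesses compositeness
    return True

def twin_pairs(n):
    """Count twin prime pairs (p, p+2) with p in (2**n, 2**(n+1)]."""
    lo, hi = 2**n + 1, 2**(n + 1)
    return sum(1 for p in range(lo, hi + 1)
               if is_prime(p) and is_prime(p + 2))

print(twin_pairs(4))  # 2: the pairs (17, 19) and (29, 31)
```

For a parallel run, the interval (2^n, 2^(n+1)] can simply be partitioned into disjoint sub-ranges, one per worker, since the primality tests are independent.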
  45.

    Hadoop bottleneck detection algorithm based on information gain by Zaole TAN, Zhifeng HAO, Ruichu CAI, Xiaojun XIAO, Yu LU

    Published 2016-07-01
    “…Hadoop has become a major platform for big data storage and large-scale data mining. Although the Hadoop platform achieves high-performance parallel computing through a distributed cluster of machines, bottlenecks inevitably appear on individual machines as cluster load increases, because the cluster is composed of inexpensive hosts. To address this problem, a bottleneck detection algorithm based on information gain was proposed. The algorithm detects the cluster's bottleneck resource by computing the information gain of each resource. Experiments show that the bottleneck detection algorithm is feasible. …”
    Get full text
    Article
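The information-gain calculation underlying such a detector can be illustrated directly. The Python sketch below uses hypothetical discretized resource metrics and a slow/fast job label; the resource whose value best predicts slowness has the highest gain and is the bottleneck candidate. It illustrates the idea only, not the paper's algorithm.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """H(labels) minus the feature-conditional entropy of the labels."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

# Hypothetical per-job samples: discretized CPU and disk load vs. a slow flag.
cpu  = ["high", "high", "low", "low", "high", "low"]
disk = ["low", "high", "low", "high", "low", "high"]
slow = [1, 1, 0, 0, 1, 0]

# CPU load perfectly predicts slowness here, so its gain equals H(slow) = 1 bit.
print(information_gain(cpu, slow))   # 1.0
print(information_gain(disk, slow))  # near 0: disk load is uninformative
```

Ranking all resources by this score singles out the one most associated with degraded jobs.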
  46.

    Interactive Visualization Platform Based on MapReduce by Jialiang Wang, Bo Qin, Jianjian Liu, Ni Liu

    Published 2012-09-01
    “…It integrates GPU and MPI parallel computing into Hadoop's MapReduce mechanism to realize parallel processing of large-scale ocean environment data sets, including data retrieval, data extraction, data interpolation, and feature-analysis visualization, so that massive data can be visualized in a remote, interactive way. …”
    Get full text
    Article
  47.

    Optimization of Communication Path Planning Method for Low Earth Orbit Constellation Based on Dijkstra Algorithm by YIN Shuming, XUE Chengcheng, HAO Liyun, ZHANG Xinjun

    Published 2024-09-01
    “…In this method, weighted graphs were used to characterize interconnections between satellites, and Dijkstra algorithm was improved to implement parallel computing and adapt to dynamically changing networks of low earth orbit constellations. …”
    Get full text
    Article
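The baseline the paper improves on, Dijkstra's algorithm over a weighted graph of inter-satellite links, can be sketched as follows; the satellite names and link weights are hypothetical.

```python
import heapq

def dijkstra(graph, src):
    """Shortest distances from src over a weighted adjacency dict."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical inter-satellite links weighted by propagation delay (ms).
links = {
    "S1": {"S2": 4, "S3": 10},
    "S2": {"S3": 3, "S4": 8},
    "S3": {"S4": 2},
    "S4": {},
}
print(dijkstra(links, "S1")["S4"])  # 9: S1 -> S2 -> S3 -> S4
```

Because each source's search is independent, runs from different satellites can proceed in parallel, which is one natural place for the parallel computing the abstract mentions.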
  48.

    Applying MapReduce frameworks to a virtualization platform for Deep Web data source discovery by XIN Jie, CUI Zhi-ming, ZHAO Peng-peng, ZHANG Guang-ming, XIAN Xue-feng

    Published 2011-01-01
    “…To improve the performance of a Deep Web crawler in discovering and searching data-source interfaces, a new method was proposed to process the massive data in the Deep Web in parallel by combining the MapReduce programming model with virtualization technology. The new crawling architecture was designed with three procedures: link-classification MapReduce, page-classification MapReduce, and form-classification MapReduce. Server virtualization was adopted to simulate a cluster environment in order to test performance. Experimental results indicate that this method is capable of large-scale parallel data computing, can improve crawling efficiency, and avoids wasted expenditure, which proves the feasibility of applying cloud technologies to the Deep Web data mining field. …”
    Get full text
    Article
  49.

    DRAV: Detection and repair of data availability violations in Internet of Things by Jinlin Wang, Haining Yu, Xing Wang, Hongli Zhang, Binxing Fang, Yuchen Yang, Xiaozhou Zhu

    Published 2019-11-01
    “…In this work, the detection and repair of data availability violations (DRAV) framework is proposed to detect and repair data violations in the Internet of Things in a distributed parallel computing environment. DRAV uses algorithms in the MapReduce programming framework, including detection and repair algorithms based on enhanced conditional functional dependencies for data consistency violations, and master-data-based MapJoin and ReduceJoin algorithms for k-nearest-neighbor-based integrity violation detection and repair. …”
    Get full text
    Article
  50.

    Dual-target WOA spectrum sharing algorithm based on Stackelberg game by Li ZHANG, Tian LIAO, Yejun HE

    Published 2020-09-01
    “…To solve the complex spectrum allocation problem, a dual-target whale optimization algorithm (WOA) with strong parallel computing capability was introduced, a Stackelberg game model that effectively reflects actual spectrum requirements was proposed, and a dual-target WOA-optimized distributed antenna system (DAS) spectrum sharing scheduling algorithm was designed. Simulation results, compared across multiple indicators such as optimal pricing and user benefits, show that the proposed algorithm has a good spectrum sharing allocation effect, can achieve fair and effective spectrum allocation, and provides an important reference for future communication network spectrum sharing. …”
    Get full text
    Article
  51.

    QGA-based network service extension algorithm in NFV by Hang QIU, Hongbo TANG, Wei YOU, Yu ZHAO, Yi BAI

    Published 2022-11-01
    “…To meet clients’ new business requirements or add additional security protection functions, the extension problem of network services already hosted in a cloud network based on network function virtualization was researched. The network service extension was modeled as an integer linear program, considering the impact on the initial service, extended-graph deployment, resource capacity, virtual network function affinity constraints, and so on. To deal with the computational complexity and dynamism of future large-scale cloud networks, a QGA-based network service extension algorithm was proposed to improve solution efficiency and quality through quantum parallel computing. Simulation results prove the efficient performance of the proposed algorithm in terms of extension success ratio and average resource cost, and show that the proposed algorithm has low time complexity. …”
    Get full text
    Article
  52.

    Maximum Mutual Information Feature Extraction Method Based on the Cloud Platform by Shasha Wei, Huijuan Lu, Wei Jin, Chao Li

    Published 2013-10-01
    “…With the large-scale application of gene chips, high-dimensional gene expression data containing many irrelevant and redundant features may reduce classifier performance. A maximum mutual information feature extraction method based on a cloud platform was proposed. The Hadoop cloud computing platform performs parallel computing after gene expression data segmentation; features are extracted with the maximum mutual information method, and a filter model exploiting the characteristics of the cloud computing platform is realized. Simulation experiments show that the method can rapidly extract features with higher classification accuracy, saving considerable time and resources and yielding a highly efficient gene feature extraction system. …”
    Get full text
    Article
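The mutual-information ranking behind such a filter method can be sketched as follows. The Python below computes I(X;Y) for discrete sequences and keeps the top-k features; the gene names and discretized expression levels are hypothetical, and in the paper this step runs in parallel on Hadoop after data segmentation.

```python
import math
from collections import Counter

def mutual_information(x, y):
    """I(X;Y) for two discrete sequences of equal length, in bits."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum(c / n * math.log2((c / n) / (px[a] / n * py[b] / n))
               for (a, b), c in pxy.items())

def top_k_features(features, labels, k):
    """Rank named feature columns by mutual information with the labels."""
    ranked = sorted(features,
                    key=lambda name: mutual_information(features[name], labels),
                    reverse=True)
    return ranked[:k]

# Hypothetical discretized gene-expression levels vs. class labels.
labels = [0, 0, 1, 1]
features = {
    "gene_a": ["lo", "lo", "hi", "hi"],   # matches the labels exactly
    "gene_b": ["lo", "hi", "lo", "hi"],   # independent of the labels
}
print(top_k_features(features, labels, 1))  # ['gene_a']
```

Each feature's score is independent of the others, which is why the per-feature computation parallelizes cleanly across partitions of the data.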
  53.

    Key technology of power big data for global real-time analysis by Guoliang ZHOU, Linjie LV, Guilan WANG

    Published 2016-04-01
    “…The problems of big data safety and reliability, equipment life-cycle management, and real-time energy balance scheduling were analyzed and discussed. To improve system analysis precision and accuracy based on large-scale real-time multi-source detail data and global equipment data, the application of in-memory computing, real-time streaming data processing, massively parallel computing, and column stores was explored. A layered architecture for a power big data analytics platform, combined with mainstream open-source big data processing technology, was proposed to guarantee efficient operation of the power system. …”
    Get full text
    Article
  54.

    Efficient and real-time lane detection using CUDA-based implementation by El Boussaki Hoda, Latif Rachid, Saddik Amine

    Published 2024-01-01
    “…To this end, we used CUDA for acceleration, taking advantage of its parallel computing capabilities to improve performance. …”
    Get full text
    Article
  55.

    Steel Plate Defect Recognition of Deep Neural Network Recognition Based on Space-Time Constraints by Chi Zhang, Zhiguang Wang, Baiting Liu, Wang Xiaolei

    Published 2022-01-01
    “…In order to process the massive image data stream generated instantaneously and to ensure the real-time performance, accuracy, and stability of the detection system, this paper constructs a distributed parallel computing system based on the client/server (C/S) model to obtain an intelligent recognition system. …”
    Get full text
    Article
  56.

    Survey of FPGA based recurrent neural network accelerator by Chen GAO, Fan ZHANG

    Published 2019-08-01
    “…Recurrent neural network (RNN) has been widely used in the machine learning field in recent years, especially for sequential learning tasks, compared with other neural networks such as CNN. However, RNN and its variants, such as LSTM and GRU, and other fully connected networks have high computational and storage complexity, which makes inference slow and difficult to apply in products. On the one hand, traditional computing platforms such as the CPU are not suitable for the large-scale matrix operations of RNN. On the other hand, the shared memory and global memory of the GPU hardware acceleration platform make the power consumption of GPU-based RNN accelerators high. Because of the FPGA's parallel computing and low-power performance, more and more research has been done on FPGA-based RNN accelerators in recent years. An overview of recent research on FPGA-based RNN accelerators is given. The software-level optimization algorithms and hardware-level architecture designs used in these accelerators are summarized, and some future research directions are proposed. …”
    Get full text
    Article
  57.

    Pengukuran Performa Apache Spark dengan Library H2O Menggunakan Benchmark Hibench Berbasis Cloud Computing by Aminudin Aminudin, Eko Budi Cahyono

    Published 2019-10-01
    “…This concept is called parallel computing. Apache Spark has advantages over similar frameworks such as Apache Hadoop: it is able to process data as a stream, meaning that data entering the Apache Spark environment can be processed directly without waiting for other data to be collected. …”
    Get full text
    Article
  58.

    Development and Application of Real-time Dynamic Flood Risk Mapping by TAN Senming, GUO Shuhui, CAO Runxiang

    Published 2024-12-01
    “…In addition, improving high-precision terrain generalization, high-speed parallel computing, and artificial intelligence applications will increase model calculation efficiency. …”
    Get full text
    Article
  59.

    NOESIS: A Framework for Complex Network Data Analysis by Víctor Martínez, Fernando Berzal, Juan-Carlos Cubero

    Published 2019-01-01
    “…The proposed framework has been designed following solid design principles and exploits parallel computing using structured parallel programming. …”
    Get full text
    Article
  60.

    Analysis of Average Shortest-Path Length of Scale-Free Network by Guoyong Mao, Ning Zhang

    Published 2013-01-01
    “…Computing the average shortest-path length of a large scale-free network requires much memory and computation time; hence, parallel computing must be applied. In order to solve the load-balancing problem of coarse-grained parallelization, the relationship between the computing time of a node's single-source shortest-path length and the features of that node is studied. …”
    Get full text
    Article
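The coarse-grained scheme described, one single-source shortest-path task per node, can be sketched as follows. This Python sketch runs a BFS per source and distributes sources over a thread pool; a real implementation would weigh task sizes by node features for load balancing, as the paper studies, and the example graph is hypothetical.

```python
from collections import deque
from concurrent.futures import ThreadPoolExecutor

def sssp_lengths(graph, src):
    """BFS single-source shortest-path lengths in an unweighted graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def average_shortest_path(graph, workers=4):
    """Coarse-grained parallelization: one SSSP task per source node."""
    nodes = list(graph)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda s: sssp_lengths(graph, s), nodes))
    total = pairs = 0
    for dist in results:
        total += sum(dist.values())
        pairs += len(dist) - 1  # exclude the source itself
    return total / pairs

# Hypothetical small graph: the path 0 - 1 - 2 - 3.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(average_shortest_path(path))  # 20/12, about 1.667
```

In a scale-free network the SSSP tasks differ sharply in cost (hub sources touch far more edges), which is exactly the load imbalance the paper models from node features.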