
網際網路技術學刊/Journal of Internet Technology

  • Ahead-of-Print

  • Journal Article

Cloud computing enables large numbers of users to access a highly scalable, reliable pool of computing and storage resources on demand over the Internet. As cloud environments continue to evolve, security measures against abnormal activities in the public cloud need to be strengthened, which makes it important to build a component that discovers anomalies in the cloud. The proposed scheme therefore develops an anomaly-detection system, named the Hypervisor Spectator, to detect irregularities on the virtual network. The Hypervisor Spectator is built on an Adaptive Neuro-Fuzzy Inference System (ANFIS) and trained with the back-propagation gradient-descent technique combined with the least-squares method. The component has been trained and evaluated on DARPA's KDD Cup data set, and the results are assessed according to training and testing performance. Performance comparisons in terms of false alarm rate and detection accuracy show that the proposed model detects irregularities in the cloud with a low error rate and minimal overhead, even for very large data sets.
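The abstract above evaluates the detector by false alarm rate and detection accuracy. As a minimal, hypothetical illustration (the label convention and function name are assumptions, not taken from the paper), the two metrics can be computed from true and predicted labels like this:

```python
# Hypothetical sketch: detection accuracy and false alarm rate from binary labels.
# Label conventions (1 = attack, 0 = normal) are assumptions, not the paper's.

def detection_metrics(y_true, y_pred, attack=1, normal=0):
    """Return (detection_accuracy, false_alarm_rate) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == attack and p == attack)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == attack and p == normal)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == normal and p == attack)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == normal and p == normal)
    detection_accuracy = tp / (tp + fn) if (tp + fn) else 0.0  # attacks correctly flagged
    false_alarm_rate = fp / (fp + tn) if (fp + tn) else 0.0    # normal traffic wrongly flagged
    return detection_accuracy, false_alarm_rate

# Example: four connections, two attacks (1) and two normal (0)
print(detection_metrics([1, 1, 0, 0], [1, 0, 0, 1]))  # (0.5, 0.5)
```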

  • Journal Article
Zhangjie Fu, Lili Xia, Xingming Sun, and 1 other author

Outsourced data storage systems, which reduce storage costs for IT enterprises and maintenance effort for users, have attracted much attention. Cryptographic techniques are a widely accepted way to ensure privacy preservation and access control in a secure outsourced data storage scheme. However, if the data relies on encryption alone, its copyright cannot be well protected. In this paper, we propose an outsourced data storage scheme that combines digital watermarking with cryptography to support both privacy preservation and data hiding. A multi-granularity encryption algorithm preserves the privacy of the outsourced data, and an RSA-based proxy re-encryption (PRE) algorithm secures key transport. A modified spatial-domain technique (LSB substitution) with high embedding capacity is used to hide data, and the decrypted data containing the hidden payload closely approximates the original data. Experiments show that our scheme is secure and feasible.
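The abstract names LSB substitution as the spatial-domain hiding primitive. The sketch below shows generic k-bit LSB substitution on pixel values; the bit ordering, the k parameter, and all names are assumptions and do not reproduce the paper's exact embedding layout:

```python
import numpy as np

def lsb_embed(pixels: np.ndarray, bits: str, k: int = 1) -> np.ndarray:
    """Replace the k least significant bits of the first pixels with message bits."""
    stego = pixels.astype(np.uint8).copy().ravel()
    groups = [bits[i:i + k] for i in range(0, len(bits), k)]
    assert len(groups) <= stego.size, "message too long for this cover"
    for i, group in enumerate(groups):
        cleared = (int(stego[i]) >> k) << k                 # zero the k lowest bits
        stego[i] = cleared | int(group.ljust(k, "0"), 2)    # write k message bits
    return stego.reshape(pixels.shape)

def lsb_extract(stego: np.ndarray, n_bits: int, k: int = 1) -> str:
    """Read n_bits back from the k LSBs of the stego pixels."""
    extracted = "".join(format(int(p) & ((1 << k) - 1), f"0{k}b") for p in stego.ravel())
    return extracted[:n_bits]

cover = np.array([[120, 37], [200, 15]], dtype=np.uint8)
stego = lsb_embed(cover, "1011")
assert lsb_extract(stego, 4) == "1011"
```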

  • Journal Article

Information hiding can prevent secret information from being accessible to others who are not specified as the receiver, which has become an important technique in many application areas. However, with the advent of information hiding techniques, various kinds of steganalysis which are the study of detecting secret messages hidden by steganography have developed at the same time. Existing information hiding methods such as modifying-based and generating-based ones modify or generate the covers to hide the secret message, which are difficult to resist the corresponding detecting techniques. In this paper, we proposed a novel information hiding method without modifying the covers or generating unnatural covers. The LSBs of the Unicode of the character in covers are extracted. A secret key shared between the sender and receiver determines the represented message bits. Without modification or generation, the proposed method could theoretically resist current detecting techniques. The experiments are conducted on the databases of 200,000 texts. When each document represents 14 bits, all the messages can be successfully embedded.
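To make the cover-selection idea above concrete, here is a hedged sketch under stated assumptions: each unmodified text already represents a bit string taken from the LSBs of its characters' Unicode code points at key-derived positions, and the sender simply picks a text that represents the desired chunk. The hash-based position selection and all names are illustrative; only the 14-bit payload per document comes from the abstract.

```python
import hashlib

PAYLOAD_BITS = 14  # the abstract reports 14 bits represented per document

def represented_bits(text: str, key: bytes, n_bits: int = PAYLOAD_BITS) -> str:
    """Bits a text represents: LSBs of code points at key-derived positions."""
    digest = hashlib.sha256(key + str(len(text)).encode()).digest()
    positions = [digest[i] % len(text) for i in range(n_bits)]
    return "".join(str(ord(text[p]) & 1) for p in positions)

def build_index(corpus, key: bytes):
    """Map each representable bit string to a cover text that already encodes it."""
    index = {}
    for text in corpus:
        index.setdefault(represented_bits(text, key), text)
    return index

# Sender: pick an unmodified cover whose represented bits equal the secret chunk.
# Receiver: recompute represented_bits(cover, key) to recover the chunk.
corpus = ["The quick brown fox jumps over the lazy dog.",
          "Cloud storage keeps growing year after year."]
idx = build_index(corpus, key=b"shared-secret")
chunk = represented_bits(corpus[0], b"shared-secret")
assert idx[chunk] == corpus[0]
```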

  • Journal Article

Quantum information hiding will have significant research value in the quantum computer era. Videos are more suitable carriers for information hiding than images because of their larger information content. A novel quantum video information hiding algorithm is proposed based on an improved least significant qubit (LSQb) method and the motion vectors of quantum video. Built on the quantum color video representation QVNEQR, the improved LSQb information hiding algorithm operates in the original quantum video domain; the improved LSQb is then used to embed the secret information into the motion vectors, making the steganography more secure. Experimental simulations on a simple video demonstrate the efficiency of the proposed approach.
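Quantum circuits for QVNEQR and LSQb cannot be reproduced meaningfully here, so the following is only a classical analogue under stated assumptions: it hides bits in the least significant bit of motion-vector components, which is the spirit of the embedding described above. The (dx, dy) vector format and all names are illustrative.

```python
# Classical analogue only; not the paper's quantum algorithm.

def embed_in_motion_vectors(vectors, bits):
    """Overwrite the LSB of each motion vector's x component with a message bit."""
    stego = []
    for (dx, dy), bit in zip(vectors, bits):
        stego.append(((dx & ~1) | int(bit), dy))
    stego.extend(vectors[len(bits):])          # untouched remainder of the vector field
    return stego

def extract_from_motion_vectors(vectors, n_bits):
    return "".join(str(dx & 1) for dx, _ in vectors[:n_bits])

mv = [(5, -3), (8, 2), (-4, 7)]
assert extract_from_motion_vectors(embed_in_motion_vectors(mv, "10"), 2) == "10"
```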

  • Journal Article

Remote care at home is becoming imperative with the rapid growth of the ageing population. Wireless health, which enables remote health monitoring (RHM), has emerged as a promising field thanks to recent advances in wireless sensors and communication technologies. Many RHM devices build on Wireless Body Area Network technologies in which sensors continuously record vital signs and pass them on to a personal server, so reliable procedures for transmitting biometric data are of great significance for RHM devices. In this paper, we propose a policy-based adaptive scheme for a previously proposed load control scheme. The adaptive load control scheme adjusts the threshold according to the load of the data packets (i.e., emergency data packets versus normal data packets), further improving the performance of the load control scheme in emergency cases.

An ahead-of-print version of this article is also available; see DOI: 10.6138/JIT.2017.18.7.20160714
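The abstract above does not give the policy rules, so the following is only a sketch of the stated idea of adjusting a load-control threshold according to the mix of emergency and normal packets; the formula, the sensitivity parameter, and all names are assumptions.

```python
# Hedged sketch of an adaptive load-control threshold; not the paper's policy.

def adapt_threshold(base_threshold: float,
                    emergency_load: float,
                    normal_load: float,
                    sensitivity: float = 0.5) -> float:
    """Lower the admission threshold as the emergency share of the load grows."""
    total = emergency_load + normal_load
    emergency_ratio = emergency_load / total if total > 0 else 0.0
    # More emergency traffic -> lower threshold -> more packets admitted promptly.
    return base_threshold * (1.0 - sensitivity * emergency_ratio)

print(adapt_threshold(0.8, emergency_load=30, normal_load=70))  # 0.68
```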
  • Journal Article
Jie Wei, Shangguang Wang, Lingyan Zhang, and 2 other authors

Many factors affect the time cost of cloud computing tasks. One of the most serious is data transmission latency, which reduces the efficiency of cloud computing. Many notable schemes proposed to overcome this factor ignore the communication cost among virtual machines (VMs) in the MapReduce environment. In this paper, we propose a VM placement approach that reduces data transmission latency by focusing on the communication cost among VMs. We first propose two VM placement optimization algorithms that minimize, respectively, the total data transmission latency and the maximum data transmission latency in the MapReduce environment. We then use these algorithms to place VMs for the Map and Reduce phases, and finally analyze the time complexity of the approach. We evaluate the approach by simulation; the results show that it reduces the average data transmission latency by 26.3% compared with other approaches.
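The abstract does not spell out the two placement algorithms, so the sketch below only illustrates the objective they optimize: it brute-forces all VM-to-host assignments on a toy instance and keeps the one with minimum total data transmission latency. The traffic matrix, latency matrix, and one-VM-per-host assumption are illustrative.

```python
from itertools import permutations

def place_vms(traffic, latency):
    """Pick the VM->host assignment minimizing total transmission latency (toy sizes only)."""
    n = len(traffic)                       # n VMs onto n hosts
    best_cost, best_assign = float("inf"), None
    for assign in permutations(range(n)):  # assign[v] = host of VM v
        cost = sum(traffic[a][b] * latency[assign[a]][assign[b]]
                   for a in range(n) for b in range(n))
        if cost < best_cost:
            best_cost, best_assign = cost, assign
    return best_assign, best_cost

traffic = [[0, 9, 1], [9, 0, 2], [1, 2, 0]]   # data volume between VMs
latency = [[0, 1, 5], [1, 0, 4], [5, 4, 0]]   # latency between hosts
print(place_vms(traffic, latency))            # heavy VM pair lands on the low-latency host pair
```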

  • Journal Article
Li Huang, Shenghua Xu, Guoxiong Hu, and 2 other authors

In the era of big data, data volumes in social networks are growing increasingly fast. Social networks are a theoretical construct used in the social sciences to study relationships and interactions between individuals, groups, and organizations, and massive data processing is essential for providing social network services. In this paper, we focus on the extraction of implicit aspect and opinion words in social networks. The Latent Dirichlet Allocation (LDA) model is a generative probabilistic model for automatically extracting implicit topics from a document set and has been widely used in natural language processing, text mining, and text categorization. However, a large number of non-taxonomy high-frequency content words in Chinese patent documents affect implicit topic generation and, furthermore, Chinese patent classification. Our study finds that the probability distribution of words in an expert database has an impact on the extraction of feature words for patent documents. This paper proposes a weight-LDA model to address this problem of the LDA topic model in Chinese patent classification. The weight-LDA model combines the probability distribution of feature words in the expert database with Gibbs sampling, reducing the impact of non-taxonomy high-frequency content words on the topic distribution and enhancing that of low-frequency content words with strong classification power. Six types of patent data sets extracted from the State Intellectual Property Office of the P.R.C. are tested. The average F value of the weight-LDA model is 6% higher than that of the traditional LDA model. In addition, the weight-LDA model is compared with word-frequency-based feature selection methods such as the TF-IDF algorithm, and its average F value is 11.4% higher than that of TF-IDF. Analysis of the experimental results shows that weight-LDA achieves better classification performance on Chinese patents.
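As a hedged illustration of where a per-word weight could enter LDA's collapsed Gibbs sampling step, the sketch below scales the topic-word count by a weight taken from a hypothetical expert database; the actual weighting scheme of weight-LDA is not reproduced, and all names and hyperparameters are assumptions.

```python
import random

def sample_topic(word_id, doc_id, n_topics, n_dk, n_kw, n_k,
                 word_weight, alpha=0.1, beta=0.01, vocab_size=1000):
    """Draw a topic for one word occurrence, scaling its count by an expert-database weight."""
    w = word_weight.get(word_id, 1.0)     # >1 boosts, <1 damps this word's influence
    scores = []
    for k in range(n_topics):
        doc_part = n_dk[doc_id][k] + alpha
        word_part = (w * n_kw[k].get(word_id, 0) + beta) / (n_k[k] + beta * vocab_size)
        scores.append(doc_part * word_part)
    r = random.random() * sum(scores)
    for k, s in enumerate(scores):
        r -= s
        if r <= 0:
            return k
    return n_topics - 1

# Tiny illustrative state: one document, two topics, one damped high-frequency word (id 7).
n_dk = [[3, 1]]; n_kw = [{7: 5}, {7: 1}]; n_k = [40, 30]
print(sample_topic(7, 0, 2, n_dk, n_kw, n_k, word_weight={7: 0.2}))
```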

  • Journal Article

Multi-tenancy control is one of the key issues among the core technologies of cloud computing services: many users access different applications and resources in a cloud environment where many tenants use the databases and applications simultaneously, the amount of data involved is usually very large, and processing it is time-consuming. In this paper, we combine identity management with role-based access control to propose a new scheme for multi-tenant cloud computing services, called RB-MTAC. RB-MTAC assigns designated roles to users, and each role carries its own functions and permissions in the cloud services. Compared with an existing UBAC system, RB-MTAC achieves average improvements of 46.3% in response time, 7.2% in throughput, and 17% in data overhead. When the cloud serves more than one thousand users, RB-MTAC still obtains better response time, higher throughput, and lower data overhead, making the multi-tenancy access control system more effective and efficient on the cloud. The cost of computing resources can also be reduced when the multi-tenant database is shared, although more attention must be paid to secure cloud-based system design and the relevant privacy issues. In the future, the proposed RB-MTAC will be employed in various cloud computing service models under the cloud MTA environment.
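A minimal sketch of a per-tenant, role-based permission check in the spirit of RB-MTAC follows; the data model, role names, and permissions are illustrative assumptions rather than the paper's design.

```python
from dataclasses import dataclass, field

@dataclass
class Tenant:
    name: str
    role_permissions: dict = field(default_factory=dict)   # role -> set of permissions
    user_roles: dict = field(default_factory=dict)          # user -> set of roles

    def can(self, user: str, permission: str) -> bool:
        """A user may act only if one of their roles in THIS tenant grants the permission."""
        return any(permission in self.role_permissions.get(role, set())
                   for role in self.user_roles.get(user, set()))

acme = Tenant("acme",
              role_permissions={"admin": {"read", "write"}, "viewer": {"read"}},
              user_roles={"alice": {"admin"}, "bob": {"viewer"}})
print(acme.can("bob", "write"))    # False: roles and permissions are scoped to the tenant
print(acme.can("alice", "write"))  # True
```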

  • Journal Article
Zhi Wang, Meiqi Tian, Xiao Zhang, and 4 other authors

Botnets are one of the most significant threats to Internet security, and machine learning has been widely deployed as a core component of botnet detection systems. Machine learning algorithms assume that the underlying data distribution of the botnet is stable across training and testing; this assumption, however, is vulnerable to well-crafted concept drift attacks such as mimicry attacks, gradient descent attacks, and poisoning attacks, so machine learning itself can become the weakest link in a botnet detection system. This paper proposes a hybrid learning system that combines vertical and horizontal correlation models based on statistical p-values. The significant diversity between the vertical and horizontal correlation models increases the difficulty of concept drift attacks. Moreover, an average p-value assessment is applied to make the system more sensitive to hidden concept drift attacks. SIM and DIFF assessments are further introduced to locate the affected features when concept drift attacks are recognized, and active feature reweighting is then used to mitigate model aging. The experimental results show that the hybrid system can recognize concept drift among different Miuref variants and reweight the affected features to avoid model aging.
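To make the p-value machinery above concrete, here is a hedged sketch: an empirical p-value per correlation model, an average p-value test to flag hidden drift, and a crude per-feature shift check standing in for the SIM/DIFF assessments. Thresholds, formulas, and names are assumptions, not the paper's definitions.

```python
from statistics import mean

def empirical_p_value(score: float, calibration_scores) -> float:
    """Fraction of calibration (known-benign) scores at least as extreme as this score."""
    n = len(calibration_scores)
    return sum(1 for s in calibration_scores if s >= score) / n if n else 1.0

def drift_suspected(vertical_p: float, horizontal_p: float, threshold: float = 0.05) -> bool:
    """Flag drift when the average p-value across the two correlation models is very low."""
    return mean([vertical_p, horizontal_p]) < threshold

def affected_features(baseline_means, current_means, tolerance: float = 0.2):
    """Crude stand-in for SIM/DIFF: features whose mean shifted by more than `tolerance`."""
    return [name for name, base in baseline_means.items()
            if abs(current_means.get(name, base) - base) > tolerance * (abs(base) or 1.0)]

p_v = empirical_p_value(0.97, [0.10, 0.20, 0.15, 0.30])   # vertical correlation model
p_h = empirical_p_value(0.88, [0.05, 0.12, 0.20, 0.18])   # horizontal correlation model
if drift_suspected(p_v, p_h):
    print(affected_features({"pkt_rate": 10.0, "dns_ratio": 0.3},
                            {"pkt_rate": 25.0, "dns_ratio": 0.31}))  # ['pkt_rate']
```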