Jayanthi Raghavan and Majid Ahmadi, Department of Electrical and Computer Engineering, University of Windsor, Windsor, Canada
In this work, a deep CNN-based model is proposed for face recognition. The CNN is employed to extract unique facial features, and a softmax classifier in the fully connected layer of the CNN is applied to classify facial images. Experiments conducted on the Extended Yale B and FERET databases with smaller batch sizes and low learning rates showed that the proposed model improves face recognition accuracy. An accuracy of up to 96.2% is achieved with the proposed model on the Extended Yale B database. To improve the accuracy further, preprocessing techniques such as SQI, HE, LTISN, GIC and DoG are applied before the CNN model. After applying these preprocessing techniques, an improved accuracy of 99.8% is achieved with the deep CNN model on the Extended Yale B database. On the FERET database with frontal faces, the CNN model yields a maximum accuracy of 71.4% before preprocessing; after applying the above-mentioned preprocessing techniques, the accuracy improves to 76.3%.
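Two of the preprocessing techniques named above, gamma intensity correction (GIC) and histogram equalization (HE), can be sketched in a few lines of NumPy. This is only an illustrative sketch with hypothetical parameter values, not the authors' implementation:

```python
import numpy as np

def gamma_intensity_correction(img, gamma=0.4):
    """GIC: power-law remapping of pixel intensities; gamma < 1 brightens
    dark regions, which helps under poor illumination."""
    x = img.astype(np.float64) / 255.0
    return np.clip(255.0 * x ** gamma, 0, 255).astype(np.uint8)

def histogram_equalization(img):
    """HE: remap intensities through the normalized cumulative histogram
    so the output uses the full dynamic range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    return (255 * cdf[img]).astype(np.uint8)
```

Either function can be applied to a face image before it is fed to the CNN.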
CNN, ANN, GPU
Areej Salaymeh1, Loren Schwiebert1 and Stephen Remias2, 1Department of Computer Science, Wayne State University, Detroit, USA, 2 Civil and Environmental Engineering, Wayne State University, Detroit, USA
Designing efficient transportation systems is crucial to save time and money for drivers and for the economy as a whole. One of the most important components of traffic systems is the traffic signal. Currently, most traffic signal systems are configured using fixed timing plans, which are based on limited vehicle count data. Past research has introduced and designed intelligent traffic signals; however, machine learning and deep learning have only recently been used in systems that aim to optimize the timing of traffic signals in order to reduce travel time. A very promising field in Artificial Intelligence is Reinforcement Learning (RL), a data-driven method that has shown promising results in optimizing traffic signal timing plans to reduce traffic congestion. However, model-based and centralized methods are impractical here due to the high-dimensional state-action space of complex urban traffic networks. In this paper, a model-free approach is used to optimize signal timing for complicated multiple four-phase signalized intersections. We propose a multi-agent deep reinforcement learning framework that aims to optimize traffic flow using data from within each signalized intersection and from other intersections, in what is called Multi-Agent Reinforcement Learning (MARL). The proposed model combines state-of-the-art techniques such as the Double Deep Q-Network and Hindsight Experience Replay (HER); we use HER to allow our framework to learn quickly in sparse-reward settings. We tested and evaluated our proposed model in the Simulation of Urban MObility (SUMO). Our results show that the proposed method is effective in reducing congestion in both peak and off-peak times.
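The Double Deep Q-Network mentioned above decouples action selection from action evaluation: the online network picks the greedy next action and the target network scores it, which reduces overestimation bias. A minimal sketch of the target computation (array names and shapes are illustrative, not the authors' code):

```python
import numpy as np

def double_dqn_targets(rewards, q_online_next, q_target_next, dones, gamma=0.99):
    """Double DQN bootstrap targets for a batch of transitions.

    q_online_next / q_target_next: per-state Q-values of the next state,
    shape (batch, n_actions). `dones` masks out the bootstrap term at
    episode boundaries."""
    best_actions = np.argmax(q_online_next, axis=1)          # online net selects
    evaluated = q_target_next[np.arange(len(best_actions)),  # target net evaluates
                              best_actions]
    return rewards + gamma * evaluated * (1.0 - dones)
```

The loss is then the squared difference between these targets and the online network's Q-values for the actions actually taken.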
Multi-agent, Deep learning, Traffic signal timing, Reinforcement learning.
Mohamed Ben Ali, Ons Wechtati, Jean Pierre Lorrre, Yazid Benazzouz, and Sarah Zeribi, Linagora Labs, 75 route de Revel 31000 Toulouse, France
Despite recent advances and achievements in the field of visual object detection, processing panoramic images raises some difficulties, such as the appearance of objects on both sides of the image, depth differences, the overlap of objects, and detection dependencies related to the camera's placement. This paper presents an evaluation of deep learning models for the detection of people and gestures, taking into account environmental variations during image capture, e.g. lighting and shooting conditions. This study enabled relevant choices to be made among the models and led to a prototype application for monitoring and providing services during work meetings.
Object recognition, deep learning, panoramic image, artificial intelligence, people tracking, meeting monitoring.
Qianwei Cheng1, AKM Mahbubur Rahman2, Anis Sarker2, Abu Bakar Siddik Nayem2, Ovi Paul2, Amin Ahsan Ali2, M Ashraful Amin2, Ryosuke Shibasaki1, and Moinul Zaber3,4, 1University of Tokyo, 2Agency Lab, Independent University Bangladesh, 3Data and Design Lab, 4UNU-EGOV, United Nations University
Rapid globalization and the interdependence of humanity engender a tremendous inflow of human migration towards urban spaces. With the advent of high-definition satellite images, high-resolution data, computational methods such as deep neural network analysis, and hardware capable of high-speed analysis, urban planning is seeing a paradigm shift. Legacy data on urban environments are now being complemented with high-volume, high-frequency data. However, the first step in understanding urban space lies in a classification of the space that is usable for data collection, analysis and visualization. In this paper we propose a novel classification method that is readily usable for machine analysis and show the applicability of the methodology in a developing-world setting. Classification for planning sustainable urban spaces should encompass buildings and their surroundings. However, the state of the art is mostly dominated by classification of building structures, building types, etc., and largely represents the developed world. Hence, these methods and models are not sufficient for developing countries such as Bangladesh, where the surrounding environment is crucial for the classification. Moreover, traditional approaches propose small-scale classifications, which give limited information, have poor scalability and are slow to compute in real time. We categorize the urban area in terms of informal and formal spaces and take the surrounding environment into account. A 50 km × 50 km Google Earth image of Dhaka, Bangladesh was visually annotated and categorized by an expert, and a map was drawn accordingly. The classification is based broadly on two dimensions: the state of urbanization and the architectural form of the urban environment. Consequently, the urban space is divided into four classes: 1) highly informal areas; 2) moderately informal areas; 3) moderately formal areas; and 4) highly formal areas. In total, sixteen sub-classes were identified.
For semantic segmentation and automatic classification, Google's DeeplabV3+ model was used. The model uses the atrous convolution operation to analyze different layers of texture and shape, which allows us to enlarge the field of view of the filters to incorporate a larger context. Images encompassing 70% of the urban space were used to train the model, and the remaining 30% was used for testing and validation. The model is able to segment with 75% accuracy and 60% mean Intersection over Union (mIoU).
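The atrous convolution referred to above spaces the kernel taps `dilation` samples apart, so the receptive field grows without adding parameters. A minimal one-dimensional toy sketch of the idea (DeeplabV3+ itself applies 2-D atrous convolutions inside a deep network):

```python
def atrous_conv1d(signal, kernel, dilation=1):
    """1-D atrous (dilated) convolution: with dilation d and kernel size k,
    the effective receptive field is (k - 1) * d + 1 samples."""
    span = (len(kernel) - 1) * dilation
    out = []
    for i in range(len(signal) - span):
        out.append(sum(kernel[j] * signal[i + j * dilation]
                       for j in range(len(kernel))))
    return out
```

With `dilation=1` this reduces to an ordinary valid convolution; larger dilations read the same number of samples from a wider window.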
Remote Sensing, Satellite Image, Building classification, Urban Environment, Deep Learning, Semantic Segmentation, Urban Planning, Socio-economic situation, Poverty Estimation.
Sadek Mansouri, Tunisia
Automatic handwriting recognition is a useful task for many applications. Most research has focused on Latin languages; however, few approaches have been proposed for the Arabic language, due to the specific and complex features of handwritten Arabic text. In this paper, we propose a deep learning approach for handwritten Arabic character recognition using a new convolutional neural network (CNN) model. This model was trained and tested on the Arabic Handwritten Character Dataset (AHCD). The obtained results prove the efficiency of the proposed CNN, which achieves an accuracy of 96.04% and outperforms other CNN models in the literature.
Convolutional Neural Network, AHCD, Handwritten Arabic Characters recognition, deep learning.
Emma Yang1 and Markus van Almsick2, 1The Brearley School, New York, USA, 2Wolfram Research Inc., Champaign, Illinois, USA
In many ways, neural networks are designed to mimic the human visual system, especially in the first convolution layer. Furthermore, first convolutional layers appear to be the same regardless of the vision task. We determine these features of the first layer using biological and mathematical considerations and exempt these features from the training process via back propagation. We design artificial receptive fields based on the structure of the neural cells in the primary visual cortex by taking inspiration from Gabor and Gaussian kernels that model the receptive fields of these cells. We show that using these artificial receptive fields as fixed weights in the first convolutional layer allows images to be processed without losing any image information. Using these fixed artificial weights allowed for the specific extraction and isolation of certain image features in subsequent layers of the neural network architecture. This ability enables us to more closely and precisely examine how different neural network designs process signals, particularly when it comes to computer vision and image classification applications—feature-specific neural network design. We also demonstrate that using these fixed artificial receptive fields improves the accuracy rate and lessens the training time of the models it is used in. Finally, we explore the resilience of these receptive fields against adversarial images. All approaches presented in this paper were implemented and visualized with Wolfram Mathematica, including the neural network architectures shown in the figures below.
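A fixed receptive field of the kind described above can be built as a Gabor kernel: a sinusoidal grating modulated by a Gaussian envelope, the classic model of V1 simple cells. A minimal NumPy sketch with illustrative parameter values (the paper's own kernels are implemented in Wolfram Mathematica):

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, wavelength=4.0, phase=0.0):
    """Real-valued Gabor kernel on a size x size grid.

    theta sets the orientation of the carrier grating, sigma the width of
    the Gaussian envelope, wavelength the spatial period of the grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_theta = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * x_theta / wavelength + phase)
    return envelope * carrier
```

A bank of such kernels at several orientations, used as frozen weights, would play the role of the fixed first convolutional layer.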
Neural networks, image processing, signal processing, human visual system, receptive fields, Gaussian derivatives, adversarial examples.
Alfredo Silva and Marcelo Mendoza, Federico Santa María Technical University, Santiago, Chile
Word embeddings are vital descriptors of words in unigram representations of documents for many tasks in natural language processing and information retrieval. The representation of queries has been one of the most critical challenges in this area because a query consists of a few terms and has little descriptive capacity. Strategies such as average word embeddings can enrich the queries' descriptive capacity since they favor the identification of related terms from the continuous vector representations that characterize these approaches. We propose a data-driven strategy to combine word embeddings. We use IDF combinations of embeddings to represent queries, showing that these representations outperform the average word embeddings recently proposed in the literature. Experimental results on benchmark data show that our proposal performs well, suggesting that data-driven combinations of word embeddings are a promising line of research in ad-hoc information retrieval.
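An IDF-weighted combination of term embeddings, as opposed to a plain average, lets rare, discriminative query terms dominate the query vector. A minimal sketch under the assumption of a dict-of-lists embedding table (all names here are illustrative, not the authors' code):

```python
import math

def idf_weighted_query_embedding(query_terms, embeddings, doc_freq, n_docs):
    """Weighted average of term vectors, with IDF weights
    idf(t) = log((1 + N) / (1 + df(t))); out-of-vocabulary terms are skipped."""
    dim = len(next(iter(embeddings.values())))
    vec = [0.0] * dim
    total = 0.0
    for t in query_terms:
        if t not in embeddings:
            continue
        idf = math.log((1 + n_docs) / (1 + doc_freq.get(t, 0)))
        total += idf
        for i, v in enumerate(embeddings[t]):
            vec[i] += idf * v
    return [v / total for v in vec] if total else vec
```

A stop word appearing in nearly every document gets near-zero weight, so the query vector stays close to the embeddings of its content terms.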
Word embeddings, information retrieval, query representation.
Mohit Mayank, Tata Consultancy Service, Pune, India
Word embeddings generated from dual embedding space methods introduce a unique variety of model variations which differ from the originally proposed default models. While existing variations are historically explained by hyper-parameters like embedding dimension, context window size and training method, for cue-response tasks the factoring of the dual embedding vectors must also be considered. This introduces a variation called the comparison method, where either a transformation or a permutation of both embedding weights is used. This paper considers all of these variations to compare two classical embedding methods belonging to two different methodologies: Word2Vec, which is window-based, and GloVe, which is count-based. For an extensive evaluation considering all variations, a total of 84 different models were compared on semantic, association and analogy evaluation tasks built from 9 open-source linguistic datasets. The final Word2Vec results show a preference for non-default models on 2 of the 3 tasks. In the case of GloVe, non-default models outperform on all 3 evaluation tasks.
Intrinsic evaluations, Model analysis, Word2Vec, GloVe, Dual word embedding space.
Luis-Gil Moreno-Jiménez and Juan-Manuel Torres-Moreno, Université d’Avignon/LIA (France) & Polytechnique Montréal, Québec (Canada)
In this paper we introduce the Spanish Literary corpus (MegaLite), a new corpus suitable for Natural Language Processing (NLP) tasks, Computational Creativity (CC), text generation and others. We address the creation of this corpus of literary documents in order to evaluate or design algorithms for text generation, classification, stylometric analysis, sentiment detection, etc. The corpus was constituted manually. Nearly 5,200 works in the genres of narrative, poetry and drama constitute this corpus. Some statistics and applications of this corpus are presented and discussed. The documents in the MegaLite corpus will be available to the community as a free resource, in an adequate format.
Emotion Corpus, Spanish Literary Corpus, Learning algorithms, Linguistic resources.
John Kalung Leung1, Igor Griva2 and William G. Kennedy3, 1Computational and Data Sciences Department, Computational Sciences and Informatics, College of Science, George Mason University, 4400 University Drive, Fairfax, Virginia 22030, USA, 2Department of Mathematical Sciences, MS3F2, Exploratory Hall 4114, George Mason University, 4400 University Drive, Fairfax, Virginia 22030, USA, 3Center for Social Complexity, Computational and Data Sciences Department, College of Science, George Mason University, 4400 University Drive, Fairfax, Virginia 22030, USA
This paper introduces an ingenious text-based affective-aware pseudo association method (AAPAM) to connect disjoint users and items across different information domains and leverage them to make cross-domain content-based and collaborative filtering recommendations. This paper demonstrates that the AAPAM method can seamlessly join datasets from different information domains so that they act as one, without any additional cross-domain information retrieval protocols. Besides enabling cross-domain recommendations, the benefit of joining datasets from different information domains through AAPAM is that it eradicates cold-start issues while making serendipitous recommendations.
Behavioral Analysis, Emotion-aware Recommender System, Emotion prediction, Personality, Pseudo Users Association.
Aman Pathak, Department of Computer Science Engineering, Medi-Caps University, Indore, India
Natural language processing (NLP) has witnessed many substantial advancements in the past three years. With the introduction of the Transformer and the self-attention mechanism, language models are now able to learn better representations of natural language. These attention-based models have achieved exceptional state-of-the-art results on various NLP benchmarks. One of the contributing factors is the growing use of transfer learning: models are pre-trained on unsupervised objectives using rich datasets to develop fundamental natural language abilities, and are then fine-tuned on supervised data for downstream tasks. Surprisingly, recent research has led to a novel era of powerful models that no longer require fine-tuning. The objective of this paper is to present a comparative analysis of some of the most influential language models. The dimensions of the study are problem-solving methodologies, model architecture, compute power, benchmark accuracies and shortcomings.
Natural Language Processing, Transformers, Attention-Based Models, Representation Learning, Transfer Learning.
David Noever1, Josh Kalin2, Matt Ciolino1, Dom Hambrick1, and Gerry Dozier2, 1PeopleTec, Inc., 4901 Corporate Drive. NW, Huntsville, AL, USA, 2Department of Computer Science and Software Engineering, Auburn University, Auburn, AL, USA
Taking advantage of computationally lightweight but high-quality translators prompts consideration of new applications that address neglected languages. Locally run translators for less popular languages may assist data projects with protected or personal data that may require specific compliance checks before posting to a public translation API, but which could render reasonable, cost-effective solutions if done with an army of local, small-scale pair translators. Like handling a specialist's dialect, this research illustrates translating two historically interesting but obfuscated languages: 1) hacker-speak ("l33t") and 2) reverse (or "mirror") writing as practiced by Leonardo da Vinci. The work generalizes a deep learning architecture to translatable variants of hacker-speak with lite, medium, and hard vocabularies. The original contribution highlights a fluent translator of hacker-speak in under 50 megabytes and demonstrates a companion text generator for augmenting future datasets with more than a million bilingual sentence pairs. A primary motivation stems from the need to understand and archive the evolution of the international computer community, one that continuously enhances its talent for speaking openly but in hidden contexts. This training of bilingual sentences supports deep learning models using a long short-term memory, recurrent neural network (LSTM-RNN). It extends previous work demonstrating an English-to-foreign translation service built from as little as 10,000 bilingual sentence pairs. This work further solves the equivalent translation problem in twenty-six additional (non-obfuscated) languages and rank-orders those models and their proficiency quantitatively, with Italian as the most successful and Mandarin Chinese as the most challenging.
For neglected languages, the method prototypes novel services for smaller niche translations such as Kabyle (an Algerian dialect), which has between 5 and 7 million speakers but has not yet reached development for most enterprise translators. One anticipates the extension of this approach to other important dialects, such as translating technical (medical or legal) jargon, processing health records, or handling many of the dialects collected from specialized domains (mixed languages like "Spanglish", acronym-laden Twitter feeds, or urban slang).
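Bilingual hacker-speak sentence pairs of the kind used to train such a translator can be generated mechanically from plain English. A toy sketch; the substitution table below is a hypothetical "lite" vocabulary, not the authors' dataset:

```python
# Hypothetical "lite" l33t substitution table (illustrative only)
L33T = {"a": "4", "e": "3", "i": "1", "o": "0", "t": "7", "s": "5"}

def to_l33t(text):
    """Encode plain English into lite l33t-speak, character by character;
    characters without a mapping pass through unchanged."""
    return "".join(L33T.get(c.lower(), c) for c in text)

def make_pairs(sentences):
    """Build (obfuscated, plain) bilingual pairs for seq2seq training."""
    return [(to_l33t(s), s) for s in sentences]
```

Because the lite mapping is deterministic, arbitrarily many aligned pairs can be produced from any monolingual English corpus; the medium and hard vocabularies would require richer, many-to-many substitution rules.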
Recurrent Neural Network, Long Short-Term Memory (LSTM) Network, Machine Translation, Encoder-Decoder Architecture, Obfuscation.
Qi Zhai, Zhigang Kan, Linhui Feng, Linbo Qiao, and Feng Liu, College of Computer, National University of Defense Technology, Changsha 410073, China
Recently, Chinese event detection has attracted more and more attention. However, most existing event detection works are designed to process English unstructured text and are less effective on the Chinese event detection task due to the huge differences between Chinese and English. As a special kind of logographic writing, Chinese characters and glyphs are semantically useful but still unexplored in the task of event detection. To that end, in this paper we propose a novel Glyph-Aware Fusion Network, named GlyFN, to introduce character and glyph representations into the contextual semantic representations of a pre-trained language model. To better integrate the two representations for Chinese event detection, we design a Vector Linear Fusion mechanism that takes full advantage of Chinese characters, character-level glyphs and context-level characters simultaneously. Specifically, it utilizes a max-pooling mechanism to capture the salient information of the two feature representations, and uses linear operations on the vectors to retain the unique information of their respective feature spaces, so that the resulting fusion captures the important information of both features while retaining their distinctive representations. Besides, for large-scale unstructured text data in the real world, we distribute the data to different clusters, which extract events quickly and efficiently in parallel. Finally, extensive experiments are conducted on the ACE2005 Chinese corpus as well as large-scale real-world data. Experimental results show that our model obtains increases of 7.48 (10.18%) and 6.17 (8.7%) in F1-score for event trigger identification and classification, respectively, over the state-of-the-art method. Furthermore, with the distributed acceleration framework, event detection on large-scale unstructured text can be done efficiently.
Distributed Chinese Event Detection, Fusion Network, Glyph.
Matthew Schofield1, Gulsum Alicioglu2, Russell Binaco1, Paul Turner1, Cameron Thatcher1, Alex Lam1 and Bo Sun1, 1Department of Computer Science, Rowan University, Glassboro, New Jersey, USA, 2Department of Electrical and Computer Engineering, Rowan University, Glassboro, New Jersey, USA
Malicious software is constantly being developed and improved, so the detection and classification of malicious applications is an ever-evolving problem. Since traditional malware detection techniques fail to detect new or unknown malware, machine learning algorithms have been used to overcome this disadvantage. We present a Convolutional Neural Network (CNN) for malware type classification based on Windows system API (Application Program Interface) calls. This research uses a database of 5385 instances of API call streams, each labeled with one of eight malware types of the source malicious application. We use a 1-dimensional CNN, mapping API call streams to categorical and term frequency-inverse document frequency (TF-IDF) vectors respectively. We achieved accuracy scores of 98.17% using TF-IDF vectors and 95.40% using categorical vectors. The proposed 1-D CNN outperformed other traditional classification techniques, which reached an overall accuracy of 91.0%.
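The TF-IDF mapping described above weights each API call by how frequent it is within one stream and how rare it is across streams. A minimal self-contained sketch (the call names and the plain tf x log(N/df) weighting are illustrative; library implementations such as scikit-learn use smoothed variants):

```python
import math
from collections import Counter

def tfidf_vectors(call_streams):
    """Turn lists of API-call names into TF-IDF vectors over a shared,
    sorted vocabulary: weight = (count / length) * log(N / doc_freq)."""
    n = len(call_streams)
    vocab = sorted({c for stream in call_streams for c in stream})
    df = {c: sum(1 for s in call_streams if c in s) for c in vocab}
    vectors = []
    for stream in call_streams:
        tf = Counter(stream)
        vectors.append([(tf[c] / len(stream)) * math.log(n / df[c])
                        for c in vocab])
    return vocab, vectors
```

A call appearing in every stream gets weight zero, while calls peculiar to one malware family dominate that family's vectors, which is what makes the representation discriminative for the 1-D CNN.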
Convolutional Neural Network, Malware Classification, Windows API Calls, Term Frequency-Inverse Document Frequency Vectors.
Haqi Khalid1*, Shaiful Jahari Hashim1, Sharifah Mumtazah Syed Ahmad1, Fazirulhisyam Hashim1 and Muhammad Akmal Chaudary2, 1Department of Computer and Communication Systems Engineering, Faculty of Engineering, Universiti Putra Malaysia, Serdang 43400, Malaysia, 2Department of Electrical Engineering, College of Engineering, Ajman University, Ajman, United Arab Emirates
Authentication is an essential mechanism by which two communicating parties verify each other in an open network environment. However, medical sensor networks transmit sensitive information over wireless body sensors; to satisfy the requirements of such practical applications, many authentication schemes using different methods have been proposed, but they are limited to a single server. The absence of an authentication scheme that supports multi-server networks, given the wide growth of distributed servers, remains an issue. We therefore propose a secure authentication scheme for a multi-server environment in wireless medical sensor networks. The scheme is implemented with a smartcard, a password and a user identity. It offers good protection against replay, impersonation and privileged-insider attacks, and it secures the communication among the multiple parties that interact with each other.
Authentication, Security, WSN, Multi-server, WMSN.
Wosah Peace Nmachi and Thomas Win, School of Computing & Engineering, University of Gloucestershire, Park Campus, Cheltenham, GL50 2RH, United Kingdom
Email is a channel of communication which is increasingly used by individuals and organisations to exchange information. It is considered a confidential medium of communication, but this is no longer the case, as attackers send malicious emails to users to deceive them into disclosing private personal information such as usernames, passwords, and bank card details. In search of a solution to combat phishing attacks, different approaches have been developed. However, traditional existing solutions have been of limited help to email users in distinguishing phishing emails from legitimate ones. This paper reviews the different email and website phishing solutions for phishing attack detection. It first provides a literature analysis of different existing phishing mitigation approaches. It then discusses the limitations of these techniques, before concluding with an exploration of how phishing detection can be improved.
Cyber-security, Phishing Email Attack, Deep Learning, Stylometric Analysis.
Annapurna P Patil, Harivind Premkumar, Kiran M.H.M, Pranav Hegde, Ramaiah Institute of Technology, MSR Nagar, Bangalore-560054, Karnataka, India
With the current advances in networking and computer network usage in different sectors of technology, network security plays a prime role in enabling networks' proper functioning by detecting and preventing attacks. This paper proposes an architecture using the Snort IDS/IPS and machine learning to build an Intelligent Network Intrusion Detection and Prevention System with dynamic rule updates, creating a robust and secure system with reduced resource consumption that can be used in Domestic Networks. JARVIS, the proposed system, detects malicious patterns in real-time traffic data and takes action by dynamically updating Snort rules. By deploying a machine learning model in parallel and dynamically enabling rules, Snort's resource consumption can be optimized. The model detects attacks and suggests rules that can be deployed on Snort to prevent the attack. JARVIS also provides a web interface where the user can view Traffic Data, Detected Attacks and take necessary actions.
Snort, Intrusion Prevention System, Cyber Security, Machine Learning, DoS Attack.
Alessio Bonti, Akanksha Saini, Thien Pham, Mohamed Abdelrazek, Lorenzo Pinto, Deakin University, Burwood, Melbourne, Victoria, Australia
The data economy is predicted to boom and become a $156 billion business by 2025. In this demo we introduce the use of distributed ledger technologies (DLT) applied to digital surveys in order to create an ecosystem where data becomes a central piece of a complex economy. Our system allows for interesting key features: ownership, traceability, secure profiles, and anonymity where required. The most important feature is the incentive mechanism that rewards all participants, both users creating surveys and those answering them. DSurvey (decentralized survey) is a novel application framework that aims at moving away from the large commercial data-sink paradigm, whose business is restricted to gathering data and reselling it. In our solution no central data sink exists: the data always belongs to its creator, who is able to know who is using it and to receive royalties.
Decentralized survey, data ownership, incentive mechanism, ICO, blockchain.
Qi Li1, 2, Chenglei Peng1, 2, Yazhen Ma3, Sidan Du1*, Bin Guo3*, Yang Li1*, 1School of Electronic Science and Engineering, Nanjing University, China, 2Nanjing Institute of Advanced Artificial Intelligence, China, 3Affiliated Taikang Xianlin Drum Tower Hospital, Medical School of Nanjing University, China
Diabetic retinopathy (DR) is one of the leading causes of preventable blindness. It is urgent to develop reliable methods for automatic DR screening, the key to which is the detection of lesions. This paper presents an innovative method to detect two early DR lesions: micro-aneurysms and hemorrhages. We design a multiscale Convolutional Neural Network model that accepts image patches extracted from the fundus image at multiple scales with complementary information. These patches are fed into the VGG-16 model as a feature extractor. Feature vectors of different scales are fused into a global vector that better represents the input fundus image. Experiments are carried out on both local and public datasets. Results show that the multiscale CNN model outperforms the single-scale model, with a 15% increase in sensitivity. The best result is 95.6% sensitivity, 92.9% precision, and 94.2% F1-score on the local dataset, and 95.2% sensitivity and 87% precision on the public dataset, better than other approaches.
Medical image processing; Diabetic retinopathy; Lesion detection; Multi-scale CNN; Computer-aided diagnosis
Jessica Salinas, School of Sciences and Engineering, Tecnologico de Monterrey, Monterrey, Mexico
Transcription factor binding sites are DNA regions involved in the regulation of gene expression. These elements are usually comprised of short, degenerate sequences, making their correct identification a complex task to achieve experimentally. Computational methods comprising statistical techniques could provide a solution to this problem. The genes involved in the folate biosynthesis pathway in plants are an example of DNA sequences that have not been broadly explored in terms of regulation. In this study, sequences of genes involved in folate biosynthesis were explored by computational means to predict transcription factor binding sites within the pathway. Folate biosynthesis and one-carbon metabolism genes in 19 different plant species were analyzed using the Analysis of Motif Enrichment (AME) tool. These findings were associated with their biological roles and were integrated into a regulatory network. The results from this work could provide insight into the transcriptional regulation of the folate biosynthesis pathway in plants.
Bioinformatics, Transcription Factor Binding Sites, Motifs, Folate Biosynthesis, Plants.
Tingyao Xiong1 and Jonathan I. Hall2, 1Department of Mathematics and Statistics, Radford University, Radford, VA 24142, USA, 2Department of Mathematics, Michigan State University, East Lansing, MI 48824, USA
Finding binary sequences with large second-harmonic generation (SHG) contrast ratios is very important in ultrafast science, biomedical optics, high-resolution microscopy and label-free imaging. In this paper, we demonstrate the relation between the SHG contrast ratio and the traditional Merit Factor values. In light of known results on the Merit Factor problem, we show that Legendre sequences and Jacobi sequences are still the best candidates for obtaining binary sequences with large SHG contrast ratios. We also discuss the SHG behaviour of some sequences obtained from cyclotomic classes over the finite field GF(2^l).
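The Legendre sequences mentioned above are built from quadratic residues: for an odd prime p, entry i is +1 if i is a nonzero quadratic residue mod p and -1 otherwise. A minimal sketch (the value assigned to the 0th entry varies by convention; +1 is used here):

```python
def legendre_sequence(p):
    """+/-1 Legendre sequence of odd prime length p.

    The nonzero quadratic residues mod p are the values i*i mod p for
    i = 1..p-1; there are exactly (p-1)/2 of them, so the sequence with
    s[0] = +1 is nearly balanced (it sums to 1)."""
    residues = {(i * i) % p for i in range(1, p)}
    return [1] + [1 if i in residues else -1 for i in range(1, p)]
```

These sequences are the classical near-optimal family for the Merit Factor problem, which is why they also serve as the leading candidates for large SHG contrast ratios.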
Thidawan Klaysri, Computer Science, RMUTP, Bangkok, Thailand
Customer engagement on the Facebook fan page of a brand can be modeled as a network built from customer reactions to the moderator's posts. In this paper, the networks of consumers connected by the posts of two supermarket chains are represented in different forms of graphs. A graph-analytic framework, which adopts concepts from Social Network Analysis to examine the structure of the graphs, is presented. The graph filtering methods k-core, m-core (or m-slice) and km-core, a combination of the former two, are used to examine customer engagement behavior and to identify and filter the consumer communities. For both supermarket brands, most of the customer attitudes toward the advertising and promotion posts are positive. Their customer engagement behaviors are similar, in that the majority of customers (greater than 90%) are engaged by a single post advertising a discount promotion, following power laws with respect to the thresholds of consumer degree and co-reacted posts. Around 3% of customers consume both brands.
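The k-core filtering used above repeatedly prunes nodes with fewer than k neighbours inside the remaining subgraph until the subgraph stabilizes; what survives is the k-core. A minimal sketch with a hypothetical adjacency-list format (m-core filtering is analogous but thresholds edge weights, i.e. the number of co-reacted posts, instead of degree):

```python
def k_core(adjacency, k):
    """Return the set of nodes in the k-core of an undirected graph.

    adjacency: dict mapping each node to a list of its neighbours.
    Nodes whose degree within the surviving set drops below k are
    removed, which may cascade until a fixed point is reached."""
    nodes = set(adjacency)
    changed = True
    while changed:
        changed = False
        for n in list(nodes):
            if sum(1 for m in adjacency[n] if m in nodes) < k:
                nodes.discard(n)
                changed = True
    return nodes
```

Raising k peels away casually engaged customers layer by layer, leaving the densely interconnected engagement communities.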
Facebook graph analysis, k-core, m-core, Social Network Analysis.
Hamza Chehili1 and Mustapha Bensaada2, 1Frères Mentouri University - Constantine 1, LIRE Laboratory - Constantine 2, Algeria, 2Abbas Laghrour University, Khenchela, Algeria
The emergence of the COVID-19 pandemic has led technology to seek solutions to the different problems caused by the disease. In the monitoring area, connected devices offer new possibilities for rapid detection of and intervention in new cases. They allow remote diagnosis of patients with COVID-19 symptoms. However, the heterogeneity of platforms requires application developers to build specific solutions for each platform. In this paper, we propose a cross-platform application that permits developers to use one code base to build applications for different platforms. The paper describes the architecture of the application by presenting its three parts: interface screens (UI), data manipulation, and authentication implementation. Finally, we show selected screens of an Android release as an example.
Cross-Platform, COVID-19, Application Development.
Copyright © CNSA 2021