July 29 ~ 30, 2023, London, United Kingdom
Xu Lin1,*, Heng Li1,*, Yukun Qian1, Yun Lu2 and Mingjiang Wang1, 1College of Electronic and Information Engineering, Harbin Institute of Technology (Shenzhen), Shenzhen, China, *These authors contributed to the work equally and should be regarded as co-first authors and 2School of Computer Science and Engineering, Huizhou University, Huizhou, China
Sleep apnea syndrome (SAS) is a dangerous sleep disorder with a high incidence. As more and more people are affected by SAS, monitoring it in everyday family life becomes increasingly important, so designing an automatic SAS monitoring device is very meaningful. We designed a neural-network-based SAS detection model and transplanted the model to an application-specific integrated circuit (ASIC). Many problems arise when transplanting a neural network to an ASIC; one of the most serious is the transplantation of nonlinear activation functions. We propose a software-hardware joint optimization method to solve the activation function problem in the SAS model transplantation. When building the neural network model, we modified the activation functions in the traditional LSTM model and the attention mechanism, adopting the hard-sigmoid and Leaky-ReLU activation functions, which suit digital circuits. In the hardware transplantation, the activation functions were constructed using binary-shift-based division and three-segment and two-segment piecewise functions. A digital circuit without transplantation error can thus be obtained at a small area and time cost. We jointly optimized the model design and the digital circuit implementation, making the model better suited to the digital circuit structure so that the neural network can be transplanted more smoothly.
Sleep apnea detection, Neural network, Hardware transplantation, Activation function.
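The hardware-friendly activations described in the abstract above can be sketched in a few lines; the 0.25 hard-sigmoid slope and the 1/8 Leaky-ReLU slope below are illustrative choices (powers of two that reduce to bit shifts in fixed-point hardware), not necessarily the paper's exact constants.

```python
def hard_sigmoid(x):
    # Piecewise-linear sigmoid approximation: clip(0.25*x + 0.5, 0, 1).
    # A slope of 0.25 is a 2-bit right shift in fixed-point hardware.
    return max(0.0, min(1.0, 0.25 * x + 0.5))

def leaky_relu(x, slope=0.125):
    # slope = 1/8 is a 3-bit right shift, avoiding a hardware multiplier.
    return x if x >= 0.0 else slope * x
```

Because both functions need only shifts, adds, and comparisons, they map to small combinational logic with no lookup tables.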
Victoria Andrea Cotella, Department of Architecture, University of Naples Federico II, Naples, Italy
In recent years, interest in the automatic semantic segmentation of 3D point clouds using machine and deep learning (ML/DL) has grown due to its fundamental role in scene understanding in various computer vision, robotics and remote sensing applications. In the architecture, engineering, construction and operation (AECO) sector, Building Information Modelling (BIM) has become a standard approach to design and the use of 3D point clouds is currently the basis for the creation of as-built BIM models. Today, there is a research gap concerning the interface between point cloud segmentation and the Historical BIM process: there are no consistent studies demonstrating the possibility of automating the modelling of BIM families from the result obtained in the segmentation process in terms of geometry and semantic labels. Based on these assumptions, the present research aims to conduct a systematic review of the state of the art, including both empirical and conceptual studies, with the goal of offering a constructive synthesis that will provide a starting point for the development of innovative approaches in the field of BIM and AI.
Artificial Intelligence, 3D Point cloud, HBIM, Cultural Heritage, Digitalisation.
Simisani Ndaba, Department of Computer Science, Faculty of Science, University of Botswana
Depression is a prevailing mental disturbance affecting an individual’s thinking and mental development. Many studies have demonstrated effective automated prediction and detection of Depression. The majority of the datasets used suffer from class imbalance, where samples of a dominant class outnumber the minority class that is to be detected. This review paper uses the PRISMA review methodology to survey the class imbalance handling techniques used in Depression prediction and detection research. The articles were taken from information technology databases. The results revealed that the most common data-level technique used on its own is SMOTE, and the most common ensemble approach combines SMOTE with oversampling and undersampling techniques. The model level consists of various algorithms that can be used to tackle the class imbalance problem. A research gap was identified: undersampling methods for predicting and detecting Depression are few, and regression modelling could be considered for future research.
Depression prediction, Depression detection, Class Imbalance, Sampling, Data Level and Model Level.
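The SMOTE technique highlighted in this abstract boils down to interpolating between a minority-class sample and one of its nearest neighbours. The minimal pure-Python sketch below illustrates that core idea; the helper name and parameters are ours, not from any surveyed paper.

```python
import random

def smote(minority, k=2, n_new=4, seed=0):
    """Generate synthetic minority samples by interpolating between a point
    and one of its k nearest neighbours (the core idea behind SMOTE)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # k nearest neighbours by squared Euclidean distance, excluding a itself
        neigh = sorted((p for p in minority if p is not a),
                       key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)))[:k]
        b = rng.choice(neigh)
        gap = rng.random()  # random point on the segment between a and b
        out.append(tuple(x + gap * (y - x) for x, y in zip(a, b)))
    return out
```

Because every synthetic point lies on a segment between two real minority samples, the new samples stay inside the minority class's convex hull.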
Ziqi Liu1, Yujia Zhang2, 1The Governor’s Academy, 1 Elm St, Byfield, MA 01922, 2Computer Science Department, California State Polytechnic University, Pomona, CA91768
Online schooling has become more and more popular during recent years due to COVID-19. It allows teaching to continue without in-person contact. A prominent issue with online schooling is that teachers are unable to oversee students’ behavior during class as they would in person. It is known that many students tend to lose attention. This can make online schooling less effective, causing it to yield worse results than in-person schooling. In order to tackle this issue, this paper outlines a tool that has been developed to monitor children’s mouse and keyboard movements during online classes and analyze the data with artificial intelligence to ensure students are focused in class. For example, if a student is typing and clicking the mouse frequently, there is a higher possibility that the student is not focused, because frequent keyboard and mouse movements might indicate they are chatting with friends or playing games; on the other hand, if they are attentive in class, there would be fewer keyboard and mouse movements, as they should be taking notes.
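The behavioural heuristic described in the abstract above, where frequent keyboard and mouse events suggest off-task activity, can be illustrated with a toy scoring function. The rates and threshold below are invented for illustration and are not taken from the tool itself.

```python
def attention_score(key_events, mouse_events, window_s=60, expected_rate=0.5):
    """Toy heuristic: unusually high keyboard/mouse event rates during a
    lecture suggest off-task activity (chatting, games). The expected
    note-taking rate of 0.5 events/second is an illustrative assumption."""
    rate = (key_events + mouse_events) / window_s
    # Score decays toward 0 as the rate exceeds the expected note-taking rate.
    return max(0.0, 1.0 - max(0.0, rate - expected_rate))
```

A real system would learn the threshold per student rather than fix it, since typing speed varies widely.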
Mahfoudh Batarfi and Manohar Mareboyana, Department Of Computer Science, Bowie State University, Bowie, MD, USA
Features that can be extracted from a single image are very important in 3D face reconstruction neural networks because they provide additional information beyond the image’s size and quality. These features can be used to compensate for the lack of prior knowledge provided by a single 2D image and to overcome the dimensional differences between 2D and 3D. This paper surveys the features that can be extracted from a single image, including facial landmarks, which provide information about the geometric structure of a face, and texture maps, which provide information about the surface properties of a face. Additionally, depth maps, shading information, and albedo maps can be used to understand the 3D structure of the face and how light interacts with it. By using these features, 3D face reconstruction neural networks can create more detailed and accurate 3D models of faces, even when the input image is of low quality or has extreme poses or occlusions.
Landmarks, Depth, Texture, UV, Shading, Albedo, Face Parsing.
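As an illustration of how several of the listed cues can be combined, the sketch below rasterises sparse landmarks into a heatmap channel and stacks it with image, depth, and albedo channels to form a single network input. This is a generic pattern we assume for illustration, not a specific paper's pipeline.

```python
import numpy as np

def stack_face_features(image, landmarks, depth, albedo):
    """Build one multi-channel input: RGB image, a landmark heatmap,
    a depth map, and a grayscale albedo map, stacked along channels."""
    h, w, _ = image.shape
    heat = np.zeros((h, w), dtype=np.float32)
    for x, y in landmarks:                 # mark each landmark pixel
        heat[int(y), int(x)] = 1.0
    gray_albedo = albedo.mean(axis=-1)     # collapse albedo to one channel
    return np.dstack([image, heat[..., None], depth[..., None],
                      gray_albedo[..., None]])
```

The resulting (h, w, 6) tensor gives a reconstruction network geometric, photometric, and reflectance cues in a single pass.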
Djalal Merad Boudia, Kheira Ziadi and Assia Touati, Department of Computer Science, Ain-Témouchent University, Algeria
Nowadays, in addition to the flexibility, efficiency, speed and comfort of the private car, the spatial dispersion of housing and activities contributes to considerable growth in traffic and car use, making the car the most popular and preferred means of transport. Automating parking management is therefore necessary, because a human being is unable to identify, in real time and without mistakes, the cars that enter a secured site. Many license plate recognition systems exist today; they involve two major steps: detecting the license plate and recognizing its characters. Our system identifies cars in a car park by reading their license plates. It relies on a camera coupled with plate recognition software and a database that holds the list of incoming and outgoing cars. First, preprocessing is applied to facilitate the subsequent image analysis. We then detect all regions that could be plates, and a recognition procedure is applied in order to obtain the car's registration.
Smart Parking, wireless sensor network, character recognition, image processing, license plate.
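After preprocessing and region detection, plate candidates are typically filtered by simple geometry before character recognition. A minimal sketch of that filtering step follows; the aspect-ratio and area thresholds are illustrative assumptions, not the system's tuned values.

```python
def plate_candidates(boxes, ratio_range=(2.0, 6.0), min_area=400):
    """Keep bounding boxes (x, y, w, h) whose aspect ratio and area are
    plausible for a licence plate; the thresholds are illustrative."""
    keep = []
    for (x, y, w, h) in boxes:
        if h == 0:
            continue
        ratio = w / h
        # Plates are wide rectangles; discard squares and tiny regions.
        if ratio_range[0] <= ratio <= ratio_range[1] and w * h >= min_area:
            keep.append((x, y, w, h))
    return keep
```

Each surviving box would then be cropped and passed to the character recognition stage.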
Sri Gayathri Devi I, Sowmiya Sree S, Jerrick Gerald, Geetha Palanisamy, College of Engineering, Anna University, Chennai, India
View synthesis allows the generation of new views of a scene given one or more images. Current methods rely on multiple input images, which is often impractical in such applications, whereas utilizing a single image to generate the 3D scene is challenging, as it requires comprehensive understanding of 3D scenes. To facilitate this, a complete scene understanding of a single-view image is performed using spatial feature extraction and depth map prediction. This work proposes a novel end-to-end model, trained on real images without any ground-truth 3D information. The learned 3D features are exploited to render the 3D view. Further, on querying, the target view is generated using the Query network. The refinement network decodes the projected features to in-paint missing regions and generates a realistic output image. The model was trained on two datasets, RealEstate10K and KITTI, containing indoor and outdoor scenes respectively.
3D Scene Rendering, Differentiable Renderer, Scene Understanding, Quantized Variational Auto Encoder.
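The depth-map prediction mentioned above enables lifting pixels into 3D with the standard pinhole camera model. The sketch below shows that unprojection step; it is generic camera geometry, not the paper's code.

```python
import numpy as np

def unproject(depth, fx, fy, cx, cy):
    """Lift a single-view depth map to a 3D point cloud with a pinhole
    model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # (h, w, 3) points in camera frame
```

The resulting point cloud can then be reprojected into any queried target view to synthesize the new image.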
Fatimah Alanazi, Richard Davison, Gary Ushaw, and Graham Morgan, School of Computing, Newcastle University, Newcastle upon Tyne, UK
The detection of deep fakes simulating human faces for potentially nefarious purposes is an ongoing and evolving topic of interest. Research in prosopagnosia, or face-blindness, has indicated that specific parts of the face, and their movement, provide clues for identification to subjects with the condition. This paper outlines studies in the area of detecting and addressing the effects of prosopagnosia. For the first time, we suggest that the findings of these studies could be applied to the detection of deep fake faces, drawing a link between the facial features and movements most useful in combating the effects of prosopagnosia, with the features most productive for analysis in deep fake facial detection.
Deep fake detection, Facial recognition, Prosopagnosia, Deep learning & Biometric
In Cho Cho1, Jae-Kwang Kim2, Yicheng Yang3, Yonghyun Kwon4, and Ashish Chapagain5, 1Department of Civil, Construction, and Environmental Engineering (CCEE), Iowa State University (ISU), Ames, USA, 2Department of Statistics (STAT), ISU, Ames, USA, 3CCEE, ISU, Ames, USA, 4STAT, ISU, Ames, USA, 5CCEE, ISU, Ames, USA
Advancements in machine learning (ML) hinge upon data, the vital ingredient for training. Statistically curing missing data is called imputation, and there are many imputation theories and tools. But they often require difficult statistical and/or discipline-specific assumptions, and general tools capable of curing large data are lacking. Fractional hot deck imputation (FHDI) can cure data by filling nonresponses with observed values (thus, “hot-deck”) without resorting to assumptions. This review paper summarizes how FHDI evolved into an ultra-data-oriented parallel version (UP-FHDI). Here, “ultra” data have concurrently large instances (big-n) and high dimensionality (big-p). The evolution is made possible by specialized parallelism and a fast variance estimation technique. Validations with scientific and engineering data confirm that UP-FHDI can cure ultra data (p > 10,000 and n > 1M) and that the cured data sets can improve the prediction accuracy of subsequent ML. The evolved FHDI will help promote reliable ML with “cured” big data.
Big Incomplete Data, Fractional Hot-Deck Imputation, Machine Learning, High-Dimensional Missing Data.
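The hot-deck idea at the heart of FHDI, filling each nonresponse with a value actually observed elsewhere, can be shown in a few lines. This toy version omits FHDI's fractional weighting and variance estimation entirely.

```python
import random

def hot_deck_impute(rows, seed=0):
    """Replace each missing cell (None) with a value observed in the same
    column of another record, so every imputed value is a real observed one."""
    rng = random.Random(seed)
    cols = list(zip(*rows))
    # Donor pools: the observed values of each column.
    donors = [[v for v in col if v is not None] for col in cols]
    out = []
    for row in rows:
        out.append([v if v is not None else rng.choice(donors[j])
                    for j, v in enumerate(row)])
    return out
```

FHDI's refinement is to assign several donors per missing cell with fractional weights, which preserves distributional properties better than a single draw.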
Richard Zhang1 and Ang Li2, 1Oakton High School, 2900 Sutton Rd, Vienna, VA 22181 and 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768
Oftentimes we lose track of the time we take to skim over a website or article online, or we are simply curious about the time it might take us to read some text. We might also be curious about our attention span based on the length or difficulty of an article. This paper details the development process of an intelligent Google Chrome extension capable of gathering data from specific articles and processing the data to estimate the amount of time needed to read an article, based on the time it took to read similar or dissimilar articles. This application takes into account the length, readability, average word size, and comparisons to other reading times in order to return the most accurate time predictions. The benefit of this application is improved time management, as an accurate prediction of reading time will be provided.
Chrome-extension, Time management, Machine learning, Web scraping.
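A naive baseline for the reading-time estimate discussed above divides word count by a reading speed and scales by a difficulty factor. The constants below are common rules of thumb, not the extension's trained model.

```python
def reading_time_minutes(text, base_wpm=230.0, difficulty=1.0):
    """Estimate reading time: word count / words-per-minute, scaled by a
    difficulty factor derived here from average word length (an assumed
    proxy for readability)."""
    words = text.split()
    if not words:
        return 0.0
    avg_len = sum(len(w) for w in words) / len(words)
    # Longer average words slow reading; +10% per character beyond 5.
    difficulty *= 1.0 + max(0.0, avg_len - 5.0) * 0.1
    return len(words) / base_wpm * difficulty
```

The extension described in the abstract improves on this by comparing against the user's measured times on similar articles rather than fixed constants.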
Abdelghani ZOUADI, Laboratory of Studies and Research in the Sciences of Education, Didactics and Management, Regional Center for Education and Training Careers Daraa-Tafilalet, Errachidia, Morocco
The issue of distance education is of great importance due to the development of mass media and information communication technologies. This importance has grown greater and greater, especially during the coronavirus pandemic (Covid-19), which obliged people to stay at home and continue studying online via different tools. This study aims to investigate trainees’ perceptions of distance education at the Regional Center for Education and Training Careers – Daraa Tafilalet (RCETC-DT) during the coronavirus (Covid-19) period. The research method followed a quantitative approach. The data was collected through a questionnaire from a sample of 41 participants from the department of educational administration and preservice teachers at the RCETC-DT in Errachidia and Ouarzazate, Morocco. The findings indicated that distance education has strengths and weaknesses. They also confirmed that distance education can be more successful if it is given more attention by the official educational authorities.
Distance Education, E-Learning, Information Communication Technology, Education & Training, Educational Administration.
Jevon Mao1, Marisabel Chang2, 1Santa Margarita Catholic High School, 22062 Antonio Pkwy, Rancho Santa Margarita, CA 92688, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768
Mass shootings have emerged as a significant threat to public safety, with devastating consequences for communities and individuals affected by such events. However, a lack of widespread use of new technological infrastructure poses significant risk to victims. This paper proposes a system to classify and localize gunshots in reverberant indoor urban conditions, using MFCC features and a Convolutional Neural Network binary classifier. The location information is further relayed to users through a mobile client in real time. We installed a prototype of the system in a high school in Orange County, California, and conducted a qualitative evaluation of the approach. Preliminary results show that such a mass shooting response system can effectively improve survivability.
Machine Learning, Public Safety, Acoustics, Directioning.
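One classical ingredient of acoustic localization (the "Directioning" keyword above) is estimating the time difference of arrival of a sound at two microphones by cross-correlation. The sketch below shows that step; it is illustrative and independent of the paper's CNN classifier.

```python
import numpy as np

def tdoa_delay(sig_a, sig_b, fs):
    """Estimate the arrival-time difference of one sound at two microphones:
    the lag with peak cross-correlation gives the delay, from which a
    bearing can be derived given the microphone spacing."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)
    return lag / fs  # seconds; positive means sig_b lags sig_a
```

With two or more such pairwise delays and known microphone positions, the source direction can be triangulated.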
Puthiyavan Udayakumar, United Arab Emirates
Several domains of our daily lives are rapidly adopting the Internet of Things, including home appliances, vehicles, industry, education, agriculture, hospitals, environmental monitoring, etc. These domains have reached new peaks and are rapidly growing in popularity. Each aspect of the Internet of Things (IoT) has its own marks and milestones, gradually increasing as the technology becomes more advanced and convenient. The IoT combines various technologies and techniques to create an organized and interconnected world, so communication between entities can be done in a better, more efficient, and more usable manner. The main characteristics of any technology are its security, privacy, authentication, and trustworthiness for the end users. Security, trust, and confidentiality are crucial to ensuring user satisfaction, and IoT security is chiefly concerned with authentication, confidentiality, and access control. Several technologies are involved in managing IoT security, privacy, and trust, including NFC, RFID, and WSN. However, IoT systems are hampered by the lack of comprehensive security solutions across various vertical application domains. To fill this gap, this research paper focuses on the top ten areas that must be secured, from a security and privacy standpoint, for IoT devices.
Sensors/Devices, Data Processing, Network connectivity, Network Protocols, Wireless Networks, Mobile Networks, Viruses, Worms, Trojan, Hardware-based Root of Trust, Small Trusted Computing Base, Defense in Depth, Compartmentalization, Certificate-based Authentication, Renewable Security, Failure Reporting.
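As one concrete example of the authentication concern raised above, a pre-shared-key challenge-response exchange lets a device prove its identity without ever transmitting the key. This generic sketch uses Python's standard library and is not drawn from the paper.

```python
import hashlib
import hmac
import os

def make_challenge():
    # Server sends a fresh random nonce so responses cannot be replayed.
    return os.urandom(16)

def respond(key, challenge):
    # Device proves knowledge of the pre-shared key without sending it.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key, challenge, response):
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, response)
```

Certificate-based authentication (one of the keywords above) generalises this idea by replacing the shared symmetric key with a signed public key.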
Bilel Ben Romdhanne, Mourad Boudia and Nicolas Bondoux, Artificial Intelligence Research, Amadeus SAS, Sophia Antipolis, France
With the development of cloud offers, we observe a prominent trend of applications being migrated from private infrastructure to the cloud. Depending on the application’s complexity, the migration can be complex and needs to consider several dimensions, such as dependency issues, service continuity, and the service level agreement (SLA). Amadeus, the travel industry leader, has partnered with Microsoft to migrate its IT ecosystem to the Azure cloud. This work addresses the specificity of cloud-to-cloud migration and the multi-cloud constraints. In this paper, we summarise the Amadeus migration process. The process aims to drive the migration from an initial private cloud environment to a target environment that can be a public or hybrid cloud. Further, the process focuses on a prediction phase that guides the migration. This paper aims to provide an efficient decision-making process that guides managers and architects in optimising and securing their migration, with a focus on micro-services-oriented applications targeting an efficient deployment over multi-cloud or hybrid cloud. The prediction relies on network simulation to predict applications’ behaviour in the cloud and to evaluate different scenarios and deployment topologies beforehand. The objective is to predict migrated applications’ behaviour and identify any issue related to performance, the application’s dependencies on other components, or the deployment in the cloud. The migration process proposed in this paper relies on SimGrid, a toolkit developed by INRIA for distributed application modelling. This framework offers a generic process to model IT infrastructure and can assist cloud-to-cloud migration. Specific attention is given to predictive and reactive optimisations. The first results show the impact of predictive optimisations on securing KPIs and of reactive optimisations on reducing the solution cost.
Cloud migration, SimGrid, system simulation, app modelling, decision support, cloud deployment strategy.
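The prediction phase described above can be caricatured as estimating the end-to-end latency of a candidate deployment topology before migrating. The toy function below sums per-hop link latencies along a service call chain; it is a stand-in for SimGrid modelling, not its API, and the hop costs are invented.

```python
def predict_latency(call_chain, latency_ms):
    """Estimate end-to-end request latency for a deployment, where each hop
    in the chain is (service, zone) and crossing a zone boundary costs more
    than a local hop. Hop costs are illustrative assumptions."""
    total = 0.0
    for (svc_a, zone_a), (svc_b, zone_b) in zip(call_chain, call_chain[1:]):
        total += latency_ms["local" if zone_a == zone_b else "cross"]
    return total
```

Comparing such estimates across topologies shows why a chatty service left on-premises while its database moves to the cloud can blow an SLA.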
Xiyang Sun, Yue Zhao and Sheng Zhang, Key Laboratory of Advanced Sensor and Integrated System, Tsinghua Shenzhen, International Graduate School, Tsinghua University, Shenzhen, 518055, China
Convolutional neural networks have been continuously updated in the last decade, requiring the domain-specific architectures that support them to handle more diverse floating-point formats. We present VARFMA, a tunable-precision fused multiply-add architecture based on the Least Bit Reuse structure. VARFMA optimizes the core operation of convolutional neural networks and supports a range of precisions that includes the floating-point formats widely used in enterprises and research communities today. Compared to the latest standard baseline fused multiply-add unit, VARFMA is generally more energy-efficient in supporting multiple formats, achieving up to a 28.93% improvement for LeNet with only an 8.04% increase in area. Our design meets the IoT's needs for high energy efficiency, acceptable area, and data privacy protection in distributed networks.
Fused Multiply-add, Tunable-precision, Distributed Network, Energy Efficiency, IoT.
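The effect of a tunable-precision datapath can be imitated in software by truncating mantissa bits before accumulation. The sketch below does this for a float64 product; it is illustrative only, and VARFMA's actual datapath is not shown.

```python
import struct

def truncate_mantissa(x, keep_bits):
    """Quantise a float64 to `keep_bits` fraction bits by zeroing the rest,
    mimicking how a tunable-precision unit could drop low-order partial
    products. Sign and exponent are preserved."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    mask = ~((1 << (52 - keep_bits)) - 1) & 0xFFFFFFFFFFFFFFFF
    return struct.unpack(">d", struct.pack(">Q", bits & mask))[0]

def fma_reduced(a, b, c, keep_bits=10):
    # Multiply-add with the product truncated to the reduced precision.
    return truncate_mantissa(a * b, keep_bits) + c
```

Sweeping `keep_bits` lets one measure, per network layer, how much precision can be dropped before accuracy suffers, which is the trade-off a tunable FMA exploits in hardware.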
Madhusudhan Rao Mulagala1 and Saketha Kusuru2, 1Department of Computer Science and Engineering, Lovely Professional University, Phagwara, India, 2Department of Electrical and Electronics Engineering, Pondicherry Engineering College, Pillaichavady, India
Green computing is the study and practice of designing, manufacturing, using, and disposing of computers, servers, and related subsystems, such as monitors, printers, storage devices, and networking and communications systems, efficiently and effectively and in such a manner as to achieve the desired outcome with minimal or no impact on the environment. The objective of green cloud computing is to cut back the use of hazardous materials, maximize energy efficiency throughout the product's lifetime, and promote the recyclability and reuse of obsolete products and industrial waste. Green cloud computing is typically achieved through long-lived resource allocation strategies, virtualization systems, and power management techniques. Power is the bottleneck in raising system performance: power consumption is becoming a major issue because of excessive heat, and as circuit speed increases, power consumption grows. Data centres work with computing models in which applications require on-demand resource provisioning and allocation under time-varying workloads; resources are statically allocated based on peak-load characteristics to preserve isolation and provide performance guarantees, with little attention paid to energy consumption.
Cloud computing, Performance, Utilization, Green computing.
Jing Zhao and Qianqian Su, Department of Computer Science and Technology, Qingdao University, Qingdao, China
One scenario in data-sharing applications is that files are managed by multiple owners, and the list of file owners may change dynamically. However, most existing solutions to this problem rely on trusted third parties and have complicated signature permission processes, resulting in additional overhead. Therefore, we propose a verifiable data-sharing scheme (VDS-DM) that can support dynamic multi-owner scenarios. We introduce a management entity that combines linear secret-sharing technology, multi-owner signature generation, and an aggregation technique to allow multi-owner file sharing. Without the help of trusted third parties, VDS-DM can update file signatures for dynamically changing file owners, which helps save communication overhead. Moreover, users independently verify the integrity of files without resorting to a third party. We analyse the security of VDS-DM through a security game. Finally, we conduct extensive simulation experiments, and the experimental results demonstrate the feasibility of VDS-DM.
Security, Data Sharing, Dynamic Multi-Owner, Verification
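A simple relative of the linear secret sharing used in schemes like VDS-DM is additive sharing over a prime field, where all n shares are needed to reconstruct the secret. The sketch below illustrates that idea; it is not the paper's construction, and the field modulus is an arbitrary choice.

```python
import random

PRIME = 2**61 - 1  # an arbitrary Mersenne prime used as the field modulus

def share_secret(secret, n, seed=0):
    """Split `secret` into n additive shares over GF(PRIME): n-1 random
    shares plus one correction share so that they sum to the secret."""
    rng = random.Random(seed)
    shares = [rng.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    # Any n-1 shares reveal nothing; only the full sum recovers the secret.
    return sum(shares) % PRIME
```

Threshold schemes such as Shamir's replace the plain sum with polynomial interpolation so that any t of n shares suffice, which is closer to what practical multi-owner schemes use.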
Nick Kalsi, Fiona Carroll, Kasha Minor, Jon Platts, Cardiff School of Technologies, Cardiff Metropolitan University, Llandaff Campus, Western Avenue, Cardiff, CF5 2YB
Enhancing the sustainability of the hospitality sector with technology is essential to achieving growth whilst also reducing the hotel’s impact on the environment. Indeed, the concept of Internet of Things (IoT) has recently gained popularity as a new research topic in a wide variety of industrial disciplines, including the hospitality industry. IoT is being seen and used to transform the hospitality industry for the newly desired sustainable growth. However, it is not all ‘smooth sailing’ as multiple challenges must be addressed by organisations in the hospitality industry when installing IoT. These challenges include cost, security, infrastructure and IoT protocols. Taking into consideration the diversity of IoT applications, the paper will examine IoT’s use in hotels whilst also highlighting the challenges that hotels face when using IoT. In particular, it will cover the effect of cyber security including IoT’s protocol layers, potential monitoring and sensor technologies.
Hotel, Internet of Things (IoT), Sensors, IoT Security, Cloud, compliance, privacy, safety, standard, communication, information.
Shaikha Alhajri, Noura Aljulaidan, Zainab Alramdan, Relam Alkhaldi, Zomord Alshihab, Khaznah Alhajri, Huda Althumali, and Taghreed Balharith, Computer Science Department, College of Science and Humanities, Imam Abdulrahman Bin Faisal University, P.O. Box 31961, Jubail, Saudi Arabia.
The Internet of Things (IoT) is one of the major technology trends nowadays, and it is one means of improving everyday life. This paper presents a smart city model based on the IoT using the Cisco Packet Tracer simulation software. As a starting point, the paper explains a smart city architecture that aims to improve life through three aspects. The first aspect is creating a network that allows users to control their smart devices from anywhere and at any time. The second aspect is avoiding a high budget by improving operational efficiency through the managed interconnection between smart devices within the city. Ultimately, the third aspect is increasing security in all city facilities. The simulation showed that the smart city would make life in cities more productive and interactive.
IoT, Smart City, Cisco, Packet Tracer, Networks