Open access | Published: 02 May 2020

Indoor positioning and wayfinding systems: a survey

Jayakanth Kunhoth, AbdelGhani Karkar, Somaya Al-Maadeed & Abdulla Al-Ali

Human-centric Computing and Information Sciences, volume 10, Article number: 18 (2020)


Navigation systems help users access unfamiliar environments. Current technological advancements enable these systems to be encapsulated in handheld devices, which has effectively increased the popularity of navigation systems and the number of users. In indoor environments, the lack of Global Positioning System (GPS) signals and of line of sight with orbiting satellites makes navigation more challenging than in outdoor environments. Radio frequency (RF) signals, computer vision, and sensor-based solutions are more suitable for tracking users in indoor environments. This article provides a comprehensive summary of the evolution of indoor navigation and indoor positioning technologies. In particular, the paper reviews different computer vision-based indoor navigation and positioning systems along with indoor scene recognition methods that can aid indoor navigation. Navigation and positioning systems that utilize pedestrian dead reckoning (PDR) methods and various communication technologies, such as Wi-Fi, Radio Frequency Identification (RFID), visible light, Bluetooth and ultra-wide band (UWB), are detailed as well. Moreover, this article investigates and contrasts the different navigation systems in each category. Various evaluation criteria for indoor navigation systems are proposed in this work. The article concludes with a brief insight into future directions in indoor positioning and navigation systems.

Introduction

The term ‘navigation’ collectively represents tasks that include tracking the user’s position, planning feasible routes and guiding the user along those routes to reach the desired destination. In the past, a considerable number of navigation systems were developed for accessing outdoor and indoor environments. Most outdoor navigation systems adopt GPS and the Global Navigation Satellite System (GLONASS) to track the user’s position. Important applications of outdoor navigation systems include wayfinding for vehicles, pedestrians, and blind people [ 1 , 2 ]. In indoor environments, GPS cannot provide fair tracking accuracy due to non-line-of-sight issues [ 3 ]. This limitation hinders the implementation of GPS in indoor navigation systems, although it can be addressed by using “high-sensitivity GPS receivers or GPS pseudolites” [ 4 ]. However, the cost of implementation can be a barrier to applying such systems in real-world scenarios.

Indoor navigation systems have a broad range of applications, including wayfinding for humans in railway stations, bus stations, shopping malls, museums, airports, and libraries. Visually impaired people also benefit from indoor navigation systems. Unlike outdoor areas, navigating through indoor areas is more difficult: indoor areas contain different types of obstacles, which increases the difficulty of implementing navigation systems. A general block diagram of a human indoor navigation system is illustrated in Fig. 1.

Fig. 1 Human indoor navigation system: a general block diagram

A human indoor navigation system mainly consists of the following three modules: (1) an indoor positioning system module, (2) a navigation module, and (3) a human–machine interaction (HMI) module. The indoor positioning system estimates the user’s position, the navigation module calculates routes to the destination from the user’s current location, and the HMI module helps the user interact with the system and provides instructions to the user. Since GPS-based indoor positioning is not effective, methods based on computer vision, PDR, and RF signals are utilized for indoor positioning. Figure 2 illustrates the hierarchical classification of indoor navigation systems according to the positioning technologies they adopt.

Fig. 2 Hierarchical classification of indoor navigation systems based on adopted positioning technology

Computer vision-based systems employ omnidirectional cameras, 3D cameras or inbuilt smartphone cameras to extract information about indoor environments. Various image processing algorithms, such as Speeded Up Robust Features (SURF) [ 5 ], Gist features [ 6 ], the Scale Invariant Feature Transform (SIFT) [ 7 ], etc., have been utilized for feature extraction and matching. Along with feature extraction algorithms, clustering and matching algorithms are also adopted in conventional approaches to vision-based positioning and navigation. Apart from conventional approaches, computer vision-based navigation systems have utilized deep learning methodologies in recent years. Deep learning models contain multiple processing layers that learn the features of input data without an explicit feature engineering process [ 8 ]. Thus, deep learning-based approaches have distinguished themselves among object detection and classification methods. Egomotion-based position estimation methods are also utilized in computer vision-based navigation systems [ 9 ]. The egomotion approach estimates the camera’s position with respect to the surrounding environment.
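
As a concrete illustration of the conventional feature-based pipeline mentioned above, the sketch below matches SIFT keypoints between a query image and a reference image using OpenCV. It is a minimal generic example, not taken from any of the surveyed systems; the file names and the ratio-test threshold are assumptions.

```python
# Minimal SIFT feature matching sketch (OpenCV >= 4.4), illustrative only.
import cv2

query = cv2.imread("query_frame.jpg", cv2.IMREAD_GRAYSCALE)          # image from the user's camera (assumed path)
reference = cv2.imread("reference_frame.jpg", cv2.IMREAD_GRAYSCALE)  # image stored for a known location

sift = cv2.SIFT_create()
kp_q, desc_q = sift.detectAndCompute(query, None)
kp_r, desc_r = sift.detectAndCompute(reference, None)

# Brute-force matcher with Lowe's ratio test to discard ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(desc_q, desc_r, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# A large number of good matches suggests the query was taken near the reference location.
print(f"{len(good)} reliable matches found")
```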

PDR methods estimate the user’s position from past positions by utilizing data from accelerometers, gyroscopes, magnetometers, etc. The user’s position is calculated by combining the step length, the number of steps and the heading angle of the user [ 10 , 11 ]. Since dead reckoning approaches accumulate position errors due to drift [ 12 ], most recent navigation systems integrate other positioning technologies with PDR or introduce sensor data fusion methods to reduce the errors.
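
The core PDR update is a simple geometric accumulation of steps. The sketch below, a minimal illustration rather than any surveyed system’s implementation, advances an (x, y) position by one detected step given an estimated step length and heading; step detection and heading estimation from raw IMU data are assumed to happen elsewhere.

```python
# Minimal pedestrian dead reckoning (PDR) position update, illustrative only.
import math

def pdr_step(x, y, step_length_m, heading_rad):
    """Advance the position by one step of the given length along the heading.

    heading_rad is measured clockwise from north (the y axis), as is common for
    compass headings; this convention is an assumption of the sketch.
    """
    x_new = x + step_length_m * math.sin(heading_rad)
    y_new = y + step_length_m * math.cos(heading_rad)
    return x_new, y_new

# Example: three steps of 0.7 m heading roughly north-east (45 degrees).
pos = (0.0, 0.0)
for _ in range(3):
    pos = pdr_step(*pos, step_length_m=0.7, heading_rad=math.radians(45))
print(pos)  # position drifts with any error in step length or heading
```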

Communication technology-based approaches for indoor positioning include RFID, Wi-Fi, visible light communication (VLC), UWB and Bluetooth. RFID systems consist of an RFID reader and RFID tags attached to objects. There are two types of RFID tags, namely, active and passive. Most recent RFID-based navigation systems have implemented passive tags since they do not require an external power source. RFID-based systems utilize received signal strength (RSS), angle of arrival (AOA), time of arrival (TOA) and time difference of arrival (TDOA) for position estimation [ 13 ]. In indoor environments, however, all the methods except RSS may fail to estimate the user’s position accurately due to non-line-of-sight scenarios. The popular RSS-based positioning approaches are trilateration and fingerprinting [ 14 ]. RFID technology is widely implemented in navigation systems because of its simplicity, cost efficiency, and long effective range. Wi-Fi-based approaches are implemented in indoor environments that have a sufficient number of Wi-Fi access points; a dedicated infrastructure is not required, because these approaches can utilize existing building infrastructure, and most current buildings are already equipped with Wi-Fi access points. Wi-Fi-based indoor navigation systems make use of RSS fingerprinting, triangulation or trilateration methods for positioning [ 15 ]. Bluetooth-based systems have almost the same accuracy as Wi-Fi-based systems and use Bluetooth low energy (BLE) beacons as the source of RF signals to track the positions of users through proximity sensing approaches or RSSI fingerprinting [ 16 ]. In recent systems, smartphones are usually used as the receiver for both Bluetooth and Wi-Fi signals. VLC-based systems utilize the existing LED or fluorescent lamps within buildings, which makes them low cost; such lamps are becoming ubiquitous in indoor areas. The light emitted by the lamps is detected using smartphone cameras or an independent photodetector. TOA, AOA, and TDOA are the most popular measuring methods used in VLC-based positioning systems [ 17 ]. UWB-based positioning systems can provide centimeter-level accuracy, which is far better than Wi-Fi-based or Bluetooth-based methods. UWB uses TOA, AOA, TDOA, and RSS-based methods for position estimation [ 18 ]. A comparison of various indoor positioning technologies in terms of accuracy, cost of implementation and power consumption is shown in Fig. 3.

Fig. 3 Indoor positioning technologies: a comparison

The navigation module determines the route of the user on the constructed indoor map with respect to the user’s current position. The navigation module mainly consists of a map that represents the areas of the indoor environment and a method to plan the navigation routes. The most commonly used methods for route planning are the A* algorithm [ 19 ], Dijkstra’s algorithm [ 20 ], the D* algorithm [ 21 ] and Floyd’s algorithm [ 22 ]. In addition, some systems provide mapless navigation. All these systems are discussed in the upcoming sections.
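
To make the route-planning step concrete, the sketch below runs Dijkstra’s algorithm on a small occupancy grid, treating each free cell as a graph node connected to its 4-neighbours. It is an illustrative example only; the grid, costs and start/goal cells are assumptions, not taken from any surveyed system.

```python
# Dijkstra's shortest path on a 4-connected occupancy grid, illustrative only.
import heapq

def dijkstra(grid, start, goal):
    """grid: 2D list, 0 = free cell, 1 = obstacle; start/goal: (row, col)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0  # uniform step cost between adjacent cells
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Reconstruct the path by walking predecessors back from the goal.
    path, node = [], goal
    while node in prev or node == start:
        path.append(node)
        if node == start:
            break
        node = prev[node]
    return list(reversed(path))

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(dijkstra(grid, start=(0, 0), goal=(2, 0)))
```

The same grid abstraction also works for A*; adding a heuristic term to the priority turns this sketch into A* search.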

The human–machine interaction module allows users to communicate with the navigation system, for example to set or change the destination. The HMI module gives the user appropriate information and guidance about the route and location by means of acoustic feedback [ 23 ] or haptic feedback [ 24 ]. For visually impaired users, audio or vibration feedback is widely implemented in the HMI module.

In the past, a significant number of surveys on various indoor positioning technologies and indoor navigation systems have been published [ 16 , 17 , 18 , 25 , 26 , 27 , 28 ]. Most of these surveys concentrated on positioning systems rather than navigation systems. In addition, they considered only a single technology, such as wireless-based, visible light-based or vision-based systems. In this work, we provide a summary of recent advancements and developments in the field of indoor navigation and positioning systems that utilize different types of approaches, such as computer vision, sensors, RF signals, and visible light. The survey primarily deals with human navigation systems, including assistive systems for people with visual impairments (VI). In addition, some robotic navigation systems are also detailed in this paper.

Indoor positioning and wayfinding systems

Computer vision-based navigation and wayfinding systems

One of the main applications of indoor navigation is wayfinding for people with VI. ISANA [ 29 ] is a vision-based navigation system for visually impaired individuals in indoor environments. The proposed system prototype contains a Google Tango mobile device and a smart cane with a keypad and two vibration motors. The Google Tango device has an RGB-D camera, a wide-angle camera, and a 9-axis inertial measurement unit (IMU). The key contributions of ISANA are: (1) an “indoor map editor” to create a semantic map of indoor areas, (2) an “obstacle detection and avoidance method” that aids real-time path planning and (3) a smart cane called the “CCNY Smart Cane” that can alleviate issues associated with voice recognition software. The geometric entities on the floors, such as lines, text, polygons, and ellipses, are extracted by the indoor map editor from the input CAD model of the indoor areas. The indoor map editor can recognize the locations of rooms, doors and hallways, the spatial and geometrical relationships between room labels, and global 2-dimensional traversal grid map layers. Prim’s minimum spanning tree algorithm is employed to extract the above-mentioned semantic information. A novel map alignment algorithm to localize the users in the semantic map is proposed in ISANA; it utilizes the 6-DOF pose estimation and area description file provided by Google Tango VPS. The navigation module utilizes the global navigation graph constructed from the 2-dimensional grid map layer along with the A* algorithm for path planning. The safety of visually impaired individuals is guaranteed via the obstacle detection, motion estimation and avoidance methods introduced in ISANA. To detect obstacles, ISANA makes use of the RGB-D camera to acquire depth data. The 3-dimensional point cloud, or depth data, is rasterized and subjected to a denoising filter to remove outliers. The 3-dimensional points are aligned with the horizontal plane by a deskewing process. A 2-dimensional projection-based approach is introduced to avoid obstacles, and it produces a time-stamped horizontal map for path planning and a time-stamped vertical map for obstacle alerts. A connected component labeling-based algorithm [ 30 ] is adapted to detect objects when creating the horizontal and vertical maps. The Kalman filter is employed to estimate the motion of obstacles based on the time-stamped maps. ISANA uses the Android text-to-speech library to speak instructions and feedback to the users and a speech-to-text module [ 31 ] to recognize the user’s voice inputs. The CCNY Smart Cane provides haptic feedback to the user in noisy environments, and it also has a keypad to set destinations and an IMU to track the orientation of the user and the cane.

Tian et al. [ 32 ] developed a system for helping blind persons navigate indoor environments. The proposed system consists of a door detector module and a text recognition module. The door detector module consisted of the Canny edge detector and a curvature property-based corner detector. The relative position of a door was detected by measuring the angle between the top edge of the door and the horizontal axis. A mean shift-based clustering algorithm was adopted to enhance text extraction by grouping similar pixels. A text localization model was designed by considering that texts have shapes with closed boundaries and a maximum of two holes. Text recognition was achieved by using the Tesseract and OmniPage optical character recognition (OCR) software. The demonstrated results show that the false positive rate increased for images acquired under challenging conditions, such as low light, partial occlusion, etc.
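
The edge-and-corner front end of such door detectors can be reproduced with standard OpenCV calls. The sketch below, a generic illustration rather than the authors’ actual implementation, extracts Canny edges and Shi-Tomasi corners that a geometric door model could then reason over; the thresholds and file name are assumptions.

```python
# Edge and corner extraction as used by geometric door detectors, illustrative only.
import cv2

frame = cv2.imread("hallway.jpg", cv2.IMREAD_GRAYSCALE)  # assumed input image path

# Canny edge map: the two thresholds control the hysteresis step.
edges = cv2.Canny(frame, threshold1=50, threshold2=150)

# Shi-Tomasi corners stand in for the curvature-based corner detector of the paper.
corners = cv2.goodFeaturesToTrack(frame, maxCorners=100, qualityLevel=0.01, minDistance=10)

print(edges.shape, 0 if corners is None else len(corners))
```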

A wearable navigation system for people with VI utilizing an RGB-D camera was proposed in [ 33 ]; it used sparse features and dense point clouds for estimating camera motion. The position and orientation of objects in the indoor environment were identified using a corner-based real-time motion estimation algorithm [ 34 ], and an iterative closest point algorithm was included to prevent drift and errors in pose estimation. A simultaneous localization and mapping (SLAM) algorithm provided the mapping [ 35 , 36 ]. A modified D* lite algorithm routed the user through the shortest path. Although the D* algorithm can handle dynamic changes in the surroundings, small changes in the map can change the produced walking path, which makes navigation more complicated for people with VI. The normal D* algorithm generates the shortest path as a set of cells on the grid map, connecting the current location and the final destination while excluding untraversable cells. In this work, instead of directly following the generated set of cells, a valid waypoint is generated such that the waypoint is traversable and keeps some distance from nearby obstacles. In the valid waypoint generation method, the farthest point from the current location that is both visible and traversable is selected from the set of cells generated by the D* algorithm. Then, another point is selected that is far, visible and traversable from the first selected point. Finally, a cell near the first selected point with a lower cost function is computed. However, some of the maps were inconsistent because the map merging technique was unable to correct deformations in the merged maps.

The indoor wayfinding system for people with VI in [ 37 ] utilized Google Glass and an Android phone. The proposed object detection method used the Canny edge detector and the Hough line transform. Since walls may be one of the main obstacles in indoor environments, the floor detection algorithm identified the presence of walls by finding the height of the floor region. However, the proposed object detection method failed for bulletin boards as well as low-contrast indoor wall pixels.

In [ 38 ], the continuously adaptive mean shift (CAMShift) algorithm was implemented with the D* algorithm to help blind people navigate in indoor areas. The proposed method used image subtraction for object detection and histogram backprojection to create a color histogram of detected objects. The CAMShift algorithm provided tracking and localization of the users, and the D* algorithm helped calculate the shortest route between the source and destination.
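
Histogram backprojection and CAMShift are both available in OpenCV, so the tracking loop described above can be sketched as follows. This is a generic single-object tracking example with an assumed video source and an assumed initial bounding box, not the cited system’s code.

```python
# Color-histogram tracking with backprojection + CAMShift, illustrative only.
import cv2

cap = cv2.VideoCapture("corridor.mp4")        # assumed video source
ok, frame = cap.read()
x, y, w, h = 200, 150, 60, 120                # assumed initial window around the object
roi = frame[y:y + h, x:x + w]

# Build a hue histogram of the object and normalise it for backprojection.
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    # CAMShift adapts the window size and orientation to the backprojection.
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    print("tracked window:", track_window)
```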

Bai et al. [ 39 ] developed a vision-based navigation system for people with VI by utilizing a cloud-computing platform. The proposed prototype is made up of a stereo camera mounted on a helmet, a smartphone, a web application, and a cloud platform. The helmet also contains a speaker and earphones to facilitate human–machine interaction. The stereo camera acquires information about the surroundings and forwards it to the smartphone using Bluetooth. The smartphone acts as a bridge between users and the cloud platform. All the core activities of the system, such as object or obstacle detection, recognition, speech processing, and navigation route planning, are performed on the cloud platform. The cloud platform contains three modules, namely, speech processing, perception, and navigation. The speech processing module implements a recurrent neural network-based natural language processing algorithm [ 40 , 41 ] to analyze the user’s voice commands. The perception module makes users aware of their surroundings; it fuses object detection and recognition functions [ 42 ], scene parsing functions [ 43 , 44 ], OCR [ 45 , 46 ], currency recognition functions [ 47 ] and traffic light recognition functions [ 48 ] to improve the blind user’s awareness of the environment. All the functionalities in the perception module are based on deep learning algorithms. The navigation module implements a vision-based SLAM algorithm to construct the map. The SLAM algorithm extracts image features of the surrounding environment and recreates the path of the camera’s motion. Preassigned sighted people use the web application to provide additional support for blind users in complex scenarios.

Athira et al. [ 49 ] proposed an indoor navigation system for shopping malls. The proposed vision-based system used the GIST feature descriptor, which accelerated the processing of captured images and reduced memory requirements. The main functions of the proposed system are keyframe extraction, topological map creation, localization, and routing. Keyframes are the important frames extracted from walkthrough videos that are used to create a topological map. For each frame, the L2 norm (Euclidean distance) between the current frame’s descriptor and the preceding keyframe’s descriptor is calculated; if it exceeds a specific value, the frame is considered a keyframe. Consequently, the direction of keyframes is detected by analyzing present frames against the left and right parts of prior frames individually to create the map. Once the direction of keyframes is detected, 2D points are calculated in the map. For localization, images captured from the user’s current position are compared with existing keyframes using the L2 norm.
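
The keyframe selection rule can be written in a few lines. In the sketch below, a crude 8x8 thumbnail descriptor stands in for GIST (a hypothetical substitution, not the authors’ descriptor), and the distance threshold is an assumption.

```python
# Keyframe extraction by thresholding the L2 distance between global descriptors,
# illustrative only; the thumbnail descriptor is a stand-in for GIST.
import cv2
import numpy as np

def global_descriptor(frame):
    """Crude stand-in for a GIST descriptor: a normalised 8x8 grayscale thumbnail."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    thumb = cv2.resize(gray, (8, 8)).astype(np.float32).ravel()
    return thumb / (np.linalg.norm(thumb) + 1e-6)

def extract_keyframes(frames, threshold=0.5):
    keyframes, last_desc = [], None
    for frame in frames:
        desc = global_descriptor(frame)
        # The first frame, or any frame whose descriptor is far (L2) from the last keyframe's, is kept.
        if last_desc is None or np.linalg.norm(desc - last_desc) > threshold:
            keyframes.append(frame)
            last_desc = desc
    return keyframes
```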

Bookmark [ 50 ] is an infrastructure-free indoor navigation system that utilizes the existing barcodes of books in a library; it guides library visitors to any book’s position simply by scanning the barcodes of books in the library. Bookmark was developed as an application that can be used on any phone that has a camera, and it provides a detailed map of the library to the user. The detailed map contains the locations of stairs, elevators, doors, exits, obstacles (pillars or interior walls) and each bookcase. The map is converted to scalable vector graphics format, and the locations are represented by different color codes. To associate the books in the library with the map, a book database of call numbers (a unique alphanumeric identifier associated with each book) and the locations associated with those call numbers is created. When a user scans the barcode of a book to find its location, Bookmark’s server side collects information about the book from an existing library API. This information contains details of the book, including the call number. The system looks up the call number in the book database to retrieve the location for the user. Bookmark implements the A* algorithm to plan the route between two points of interest. Since Bookmark does not use a positioning technique, the system is unaware of the current position of the user until he/she completes the navigation or until the next barcode is scanned. The major limitations of the system include the absence of barcodes on some books and the misplacement of books on the wrong shelves.

Li et al. [ 51 ] proposed a wearable virtual usher to aid users in finding routes in indoor environments; it consists of a wearable camera that captures pictures of frontal scenes, headphones for listening to verbal routing instructions to reach a specific destination, and a personal computer. The aim of the system is to aid users in wayfinding in an indoor environment using egocentric visual perception. A hierarchical contextual structure composed of interconnected nodes uses cognitive knowledge to estimate the route. The hierarchical structure can be presented at the following three levels: (1) the top level, where the root node represents the building itself; (2) zones and areas inside the building; and (3) the bottom level, which corresponds to locations inside each area. Generally, the structure reflects the human mental model and understanding of an indoor environment. SIFT has been used for scene recognition. A “self-adaptive dynamic Bayesian network” is developed to find the best navigation route; it is self-adaptive and can modify its parameters according to the current visual frame. Moreover, this network can address uncertainties in perception and is able to predict relevant routes. The obtained results demonstrated that the developed system is capable of assisting users to reach their destination without requiring concentration or a complex understanding of the map.

ViNav [ 52 ] is a vision-based indoor navigation system developed for smartphones. The proposed system provides indoor mapping, localization and navigation solutions by utilizing visual data as well as data from the smartphone’s inbuilt IMUs. The ViNav system is designed as a client–server architecture. The client is responsible for collecting visual imagery (images and videos) and data from sensors including the accelerometer, barometer, gyroscope, etc. The server receives these data from the client and builds 3-dimensional models from them. The server comprises two modules. The first module is responsible for building 3-dimensional models of the indoor environment: a structure-from-motion technique is used to build the models from crowdsourced imagery captured by the client. The data from the accelerometer and gyroscope are utilized to detect the trajectories of the user. Moreover, Wi-Fi fingerprints collected along the path traveled by the user are combined with the 3D model for localizing the user’s position in the indoor area. The second module facilitates navigation by calculating navigation routes using pathfinding algorithms. Data about obstacles in the path are retrieved from the constructed 3D models, and navigation meshes are computed by combining pedestrians’ traveling paths, retrieved from crowdsourced user paths, with the obstacle data. Barometer readings are utilized to detect stairs, elevators, and changes of floor. The performance evaluation experiments demonstrate that ViNav can locate users within 2 s with an error of no more than 1 m.

Rahman et al. [ 53 ] proposed a vision-based navigation system using a smartphone. The system is designed so that the smartphone camera captures images in front of the user. The captured images are compared with pre-stored images to check whether they contain any obstacles. An algorithm is proposed for assisting people with visual impairments; it performs both obstacle detection and pathfinding for the user. Once an image is captured by the smartphone, the obstacle detection technique first extracts the region of interest from the image. The extracted region of interest is compared with images in the database. If an obstacle is detected, the pathfinding technique suggests an alternate path for the user by checking to the right and left of the extracted region of interest. In a test environment, the proposed system achieved an accuracy of 90%.

Reference [ 54 ] examined the performance of three indoor navigation systems that utilize different techniques for guiding people with visual impairments in indoor environments. The work focused on the development of three navigation systems that use image matching, QR codes, and BLE beacons, respectively, to localize the user, and on testing the developed systems in a real-time indoor environment. The image matching-based indoor navigation system included a novel CNN model trained with thousands of images to identify indoor locations. The QR code-based system utilized existing QR code libraries such as ZXing and ZBar. The BLE beacon-based method adopted a commercially available indoor positioning SDK to localize the user in indoor areas. All three navigation systems were implemented on a smartphone for real-time evaluation. Evaluation results show that the QR code- and image matching-based methods outperformed the BLE beacon-based navigation system for people with visual impairments in the indoor environment.

Tyukin et al. [ 55 ] proposed an indoor navigation system for autonomous mobile robots. The proposed system utilizes an image processing-based approach to navigate the robot in indoor areas. The system consists of “a simple monocular TV camera” and “color beacons”. The color beacons are passive devices that have three areas with different colors. All these colors can be visually identified, and the surfaces of the beacons are matte rather than glowing. The operation of the proposed system can be divided into three steps: (1) detection of the color beacons; (2) relative map generation, which identifies the locations of the detected beacons in the indoor space with respect to the TV camera; and (3) identification of the robot’s coordinates on the absolute map. An algorithm comprising different image processing techniques was introduced for beacon detection. Initially, the image from the TV camera is subjected to noise removal and smoothing of image defects using a Gaussian filter. After preprocessing, the image is converted to HSV. Then, the algorithm chooses each color in order, and a smooth continuous function is applied for the classification of pixels. Color mask images are generated by averaging the grayscale images from each HSV channel. Finally, the algorithm recognizes the pixel with the maximum intensity in the color mask and fills the pixels around it. The algorithm repeats this step until all colors used in the beacons are processed. Once the centers of the colored areas of a beacon are identified, the magnitudes and directions of the vectors connecting the centers of the colored areas are estimated. There are two vectors, one connecting the centers of the first and second colored areas and the other connecting the centers of the second and third colored areas. The differences between these two vectors are used to identify the beacons. A navigation algorithm is introduced to estimate the coordinates of each beacon’s location and the absolute coordinates of the TV camera. The relative coordinates of beacons were estimated using the beacon height and the aperture angle of the lens. The created relative map is an image where the relative positions of the beacons and colored beacons are represented as dots. A three-dimensional transformation is applied to the relative coordinates of the beacons to create the absolute map. The demonstrated experimental results show that the detection algorithm is able to detect the beacons only if they are within a range of 1.8 m from the TV camera. The average deviation of the calculated absolute coordinates from the true values was only 5 mm.
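
Colored-beacon detection of this kind usually starts with a per-color mask in HSV space. The OpenCV sketch below thresholds one assumed hue range and takes the centroid of the resulting mask; it is a generic illustration, not the authors’ algorithm, and the hue bounds and file name are assumptions.

```python
# HSV color-mask segmentation and centroid extraction, illustrative only.
import cv2
import numpy as np

frame = cv2.imread("beacon_view.jpg")                 # assumed camera frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Threshold an assumed hue range (here: roughly red); each beacon color gets its own range.
lower = np.array([0, 120, 70])
upper = np.array([10, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Centroid of the mask via image moments approximates the colored area's center.
m = cv2.moments(mask, binaryImage=True)
if m["m00"] > 0:
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    print(f"colored area center at ({cx:.1f}, {cy:.1f})")
else:
    print("color not found in this frame")
```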

Bista et al. [ 56 ] proposed a vision-based navigation method for robots in indoor environments. The whole navigation process depends on 2-dimensional image data instead of the 3-dimensional data used in existing methods. They represented the indoor area as a collection of reference images obtained during an earlier learning stage. The proposed method enables the robot to navigate through the learned route with the help of a 2-dimensional line segment detector. To detect line segments in the acquired images, a highly accurate and fast line detector called the EDLines detector [ 57 ] is employed. The indoor maps were created by utilizing key images and their line segments. During map construction, the first acquired image is considered a key image. The line segments of the first key image are matched with the next image’s (second image’s) line segments to form a set of matched line segments. For matching line segments, a Line Band descriptor-based matching method was adopted [ 58 ]. Matching is mainly based on the Line Band descriptor, followed by the application of geometric constraints and filters to remove false matches. Once the matched set between the key image and the next image is obtained, the method considers the following image (third image) and performs the same line segment matching between the first key image and the current image. These steps result in two matched sets of lines. A trifocal tensor is utilized to find the two-view matches between these two sets. The trifocal tensor is a “\(3\times 3\times 3\) array that contains all the geometric relationships among three views”, and it requires three-view correspondences between lines. Two-view matches (matching of the current image with the previous and next key images) are utilized for initial localization and for three-view correspondence generation. Three-view matches were used for mapping (matching the current image with the previous, next and second next key images). The previous and next key images of a currently acquired image share some line segments, which facilitates robot navigation and motion control. The rotational velocity of the robot is also derived from the three-view matches. The proposed navigation method was evaluated in three different indoor areas. The drift in the navigation path of the robot was only 3 cm and 5 cm in the first two experiments. In the third experiment, a large drift was present in the path of the robot during a circular turn. The inclusion of an obstacle avoidance module will be considered in future work to deal with dynamic objects in the indoor environment.

Table 1 illustrates the comparison of computer vision-based indoor navigation systems.

Computer vision-based positioning and localization systems

The tasks of indoor localization, positioning, scene recognition and detection of specific objects, such as doors, were also considered in the context of indoor navigation since they can be extended for wayfinding in indoor areas.

Tian et al. [ 59 ] developed a method to detect doors to assist people with VI in accessing unfamiliar indoor areas. The proposed prototype consists of a miniature camera mounted on the head for capturing images and a computer that runs the detection algorithm and provides speech output. A “generic geometric door model” built on stable edge and corner features facilitates door detection. Objects with similar shapes and sizes, such as bookshelves and cabinets, were distinguished from doors using additional geometric information. The presented results indicate a true positive rate of 91.9%.

The Blavigator project included a computer vision module [ 60 ] for assisting blind people in both indoor and outdoor areas. The proposed object collision detection algorithm uses a “2D Ensemble Empirical Mode Decomposition image optimization algorithm” and a “two-layer disparity image segmentation algorithm” to identify nearby objects. Two areas of interest are defined near the user to guarantee their safety: depth information at 1 m and 2 m is analyzed to retrieve information about obstacles in the path at the two distances.

An omnidirectional wearable system [ 61 ] for locating and guiding individuals in an indoor environment combined GIST and SURF for feature extraction. Two levels of topological classification are defined in this system, namely, global and local. The global classification considers all images as references, whereas the local classification is based on prior knowledge. A visual odometry module was developed by integrating extended Kalman filter monocular SLAM and omnidirectional sensors. The system was trained using 20,950 omnidirectional images and tested on 7027 images. Localization errors were present due to misclassified clusters.

Huang et al. [ 62 ] developed an indoor positioning system called 3DLoc, a 3D feature-based indoor positioning system that runs on handheld smart devices to locate the user in real time. The system addresses limitations of previous indoor positioning systems based on sensors and feature matching (e.g., SIFT and SURF), and it considers the 3D signatures of pictures of places to recognize them with high accuracy. An algorithm to obtain the signatures from pictures was proposed, and it is capable of robustly decoding those signatures to identify the location. In the first stage, 3D features are extracted from the captured pictures. A 3D model is then constructed from the obtained features using indoor geometry reasoning [ 63 ]. Pattern recognition is then performed to identify the 3D model. The authors proposed a K-locations algorithm to identify the accurate location. An augmented particle filter method is used if the captured images are insufficient for recognizing the location due to information loss. The inertial sensors of the mobile device are used to provide real-time navigation of users in motion. In the conducted experiments, 90% of the observed errors are within 25 cm and 2° for location and orientation, respectively.

iNavigation [ 64 ] combines SIFT feature extraction and an approximate nearest neighbor algorithm, a k-d tree with best-bin-first search, for positioning from ordinary sequential images. Inverse perspective matching was used to find the distance when an image was queried by the user. Dijkstra’s algorithm was implemented for routing through the shortest path. In this method, the locations of landmark images were manually assigned; therefore, further expansion of the landmark image dataset requires a considerable amount of manual work.

The image processing-based indoor localization method for indoor navigation in [ 65 ] utilizes the principal component analysis (PCA)-SIFT [ 66 ] feature extraction mechanism to reduce the overall running time of the system compared to that of SIFT- or SURF-based methods. It also implemented a Euclidean distance-based locality sensitive hashing technique for rapid matching of images. The precision of the system increased to 91.1% with the introduction of a confidence measure.

The localization algorithm [ 67 ] for indoor navigation apps consists of an image edge detection module using a Canny edge detector and a text recognition module using the stroke width transform, Tesseract, and the ABBYY FineReader OCR. Tesseract is a free OCR engine that supports various operating systems, and its development has been sponsored by Google. Tesseract can recognize text in more than 100 languages, including languages written from right to left, such as Arabic. The ABBYY FineReader OCR is developed by ABBYY, a Russia-based company, and it supports approximately 192 languages; its latest version is able to convert text in image files to various electronic document formats, such as PDF, Microsoft Word, Excel, PowerPoint, etc. The experimental results showed that ABBYY is fast and has high recognition accuracy on a benchmark dataset used in research on OCR and information retrieval.
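
Tesseract is scriptable from Python through the pytesseract wrapper, so the text recognition stage described above can be sketched as follows. This is a minimal generic example assuming Tesseract and pytesseract are installed and assuming a sample image path; it is not the cited algorithm itself.

```python
# Reading signage text with Tesseract via pytesseract, illustrative only.
# Requires a local Tesseract installation and `pip install pytesseract pillow`.
from PIL import Image
import pytesseract

sign = Image.open("room_sign.jpg")            # assumed image of an indoor sign
text = pytesseract.image_to_string(sign)      # default language pack (eng)
print(text.strip())
```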

Xiao et al. [ 68 ] proposed a computer vision-based indoor positioning system for large indoor areas using smartphones. The system makes use of static objects in the indoor areas (doors and windows) as references for estimating the position of the user. The proposed system mainly comprises two processes: (a) static object recognition and (b) position estimation. In the static object recognition process, the static object is first detected and identified by the Fast R-CNN algorithm [ 69 ]; the included deep learning network is similar to the VGG16 network [ 70 ]. The pixel coordinates of “control points” (physical feature points on the static object) in the image are used to calculate the position of the smartphone. The pixel coordinates of the control points are calculated by analyzing the test image and the identified reference image. The SIFT feature detector is used to extract feature points from both the test and reference images. A homography matrix is constructed from the matching feature point pairs of the test and reference images. This homography matrix and the control points of the reference image are used to find the control points of the test image. The collinearity equation model relating the control points in the image and the control points in space is used for position estimation of the smartphone. The results show that the system achieved an accuracy within 1 m for position estimation.
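
The homography step can be reproduced with OpenCV: match SIFT features between the test and reference images, estimate a homography with RANSAC, and map the reference image’s control points into the test image. The sketch below is a generic illustration under assumed image paths and control-point coordinates, not the authors’ implementation.

```python
# Mapping reference "control points" into a test image via a SIFT homography, illustrative only.
import cv2
import numpy as np

test = cv2.imread("test_view.jpg", cv2.IMREAD_GRAYSCALE)        # assumed paths
reference = cv2.imread("reference_view.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_t, des_t = sift.detectAndCompute(test, None)
kp_r, des_r = sift.detectAndCompute(reference, None)

matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_r, des_t, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

src = np.float32([kp_r[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_t[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Assumed control points marked on the reference image (e.g., door corners).
control_ref = np.float32([[100, 200], [300, 200], [300, 600], [100, 600]]).reshape(-1, 1, 2)
control_test = cv2.perspectiveTransform(control_ref, H)
print(control_test.reshape(-1, 2))
```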

A visual indoor positioning system that makes use of a CNN-based image retrieval method was proposed in [ 71 ]. The system database contains images of each scene together with their CNN features, absolute coordinates and quaternions, given with respect to a local coordinate system, as well as scene labels. In the offline phase, the CNN features of the images for each scene were extracted using the pretrained VGG16 deep learning network. The proposed system consists of the following two online phases: (1) a CNN-based image retrieval task and (2) a pose estimation task. During the image retrieval phase, the CNN retrieves the two most similar images with respect to the query image. In the pose estimation phase, the “Oriented FAST and Rotated BRIEF (ORB)” [ 72 ] feature detector is used for feature extraction from the three images (the test image and the two retrieved most similar images). The feature points of the test image are matched with each similar image using the Hamming distance. The scale of the monocular vision is calculated from the poses of the two similar images and the transformation between matched pairs of the test image and a similar image. The position and orientation of the test image are then calculated by utilizing the monocular scale and the transformation between the test image and the similar image. Images from two benchmark datasets, the ICL-NUIM dataset [ 73 ] and the TUM RGB-D dataset [ 74 ], were used for system evaluation. The average errors in pose estimation using ICL-NUIM and TUM RGB-D were 0.34 m and 3.43°, and 0.32 m and 5.58°, respectively. On the ICL-NUIM dataset, the proposed system exhibited less localization error than PoseNet [ 75 ], 4D PoseNet and an RGB-D camera pose estimation method that combines a CNN and an LSTM recurrent network [ 76 ].

PoseNet is a 6-DOF camera relocalization system for indoor and outdoor environments using a deep learning network. PoseNet uses a 23-layer convolutional model similar to GoogLeNet [ 77 ], and the Caffe library was utilized to implement the PoseNet model.

A considerable number of elderly people fall and become injured because of aging. In this scope, a smartphone-based floor detection module for structured and unstructured environments, which identifies the floor area in front of the user, was proposed in [ 78 ]. Structured environments are areas with a well-defined shape, whereas unstructured environments are areas with unknown shape. In unstructured environments, superpixel segmentation was implemented for the floor location estimation task: superpixel segmentation generates clusters of pixels, which are then reshaped based on their color surroundings. For structured environments, the Hough transform is used for line detection, and the floor-wall boundary is represented by a polygon of connected lines. The results demonstrate that the system achieved an accuracy of 87.6% for unstructured environments and 93% for structured environments.

Stairs, doors, and signs are common objects that can be used as reference points to guide people with visual impairments in indoor areas. Bashiri et al. [ 79 ] proposed an assistive system to guide people with visual impairments in indoor areas. The proposed system consists of two modules: a client mobile device that captures images and a processing server that detects the objects in the images. A CNN model was utilized to recognize indoor objects such as stairs, doors, and signs. Transfer learning was leveraged to build the object recognition model: the popular AlexNet CNN was used as the starting point for the new model. The developed CNN model was evaluated on the MCIndoor20000 dataset [ 80 ] and achieved a recognition accuracy of more than 98%.
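
Transfer learning from AlexNet of the kind described above is typically a few lines in a modern framework. The PyTorch sketch below freezes the pretrained convolutional layers and replaces the final classifier layer for a small set of indoor classes; it is an illustrative outline under assumed class counts and training details, not the authors’ code.

```python
# Transfer learning from a pretrained AlexNet for indoor object recognition, illustrative only.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # e.g., stairs, doors, signs (assumed)

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
for param in model.features.parameters():
    param.requires_grad = False            # keep the pretrained feature extractor fixed

# Replace the last fully connected layer with one sized for the indoor classes.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch; a real loop would iterate over a DataLoader.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```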

Jayakanth [ 81 ] examined the effectiveness of texture features and deep CNNs for indoor object recognition to assist people with visual impairments in indoor environments. The performance of three texture features, LPQ, LBP and BSIF, and of a CNN model built by transfer learning from a pretrained GoogLeNet model was evaluated in this work. All of the proposed methods were evaluated on the MCIndoor20000 dataset. The obtained results show that the CNN model built by transfer learning from the pretrained GoogLeNet model achieved a recognition accuracy of 100%. Although LPQ computation does not require the high-performance computing resources that CNN computation requires, the LPQ feature descriptor displayed performance comparable to the CNN for indoor object recognition.

Afif et al. [ 82 ] extended a well-known deep convolutional neural network, RetinaNet, for indoor object detection to assist the navigation of people with visual impairments in indoor areas. The proposed object detection network comprises a backbone network and a pair of sub-networks: the first sub-network performs object classification, and the second extracts the bounding box and class name of objects. A feature pyramid network is used as the backbone of the proposed detection network; feature pyramid-based architectures can detect objects at various scales, which improves multi-scale prediction performance. Evaluation of the proposed object detection network was carried out on a custom dataset containing 8000 images and 16 different indoor landmark objects. During the evaluation, different backbone architectures, such as ResNet, DenseNet and VGGNet, were tested with RetinaNet. RetinaNet with the ResNet backbone outperformed all other combinations and achieved a mean average precision of 84.16%.

An object recognition method [ 83 ] for the indoor navigation of robots was developed using a SURF-based feature extractor and bag-of-words feature vectors with a Support Vector Machine (SVM) classifier. The nearest neighbor algorithm or the RANSAC algorithm enabled feature vector matching. The proposed method was not able to recognize multiple objects in a single frame.

Table 2 presents a comparison of computer vision-based indoor positioning, indoor localization and indoor scene recognition systems.

Communication technology based indoor positioning and wayfinding systems

Communication technology-based positioning systems make use of various approaches to measure the signals from signal-transmitting devices (Wi-Fi access points, BLE beacons, etc.) installed in indoor environments. The commonly used methods are time-based methods, angle-based methods, and RSS-based methods [ 84 ]. The time-based measurements include TOA and TDOA. The TOA approach utilizes the time taken for signal propagation between the transmitter and receiver to find the range of the user, while the TDOA approach uses the difference in transmission time between two signals that have different velocities. The angle-based method, AOA, makes use of the angle of arrival at the target node to estimate the target direction. The AOA measurement technique is rarely used in indoor environments due to non-line-of-sight issues [ 85 ]. The AOA- and TOA-based indoor localization approaches are shown in Figs. 4 and 5, respectively.

Fig. 4 AOA based indoor localization approach

Fig. 5 TOA based indoor localization approach

The TDOA method computes the difference between the TOAs of the signals from two distinct RF transmitters at the mobile device. A TDOA value geometrically represents a hyperbola, as shown in the figure. When there is more than one TDOA value, the intersection point of the hyperbolas is estimated as the position of the mobile device. Figure 6 illustrates the TDOA-based indoor localization approach.

Fig. 6 TDOA based indoor localization approach

Lateration, angulation, proximity, and radio fingerprinting are the main techniques used in communication technology-based systems for position estimation. The lateration technique calculates the distances between the receiver device and a cluster of transmitting devices (access points, tags or beacons) attached at predefined locations. The angulation technique is similar to the lateration technique but considers the angle or phase difference between the sender and receiver instead of the distance for position estimation [ 86 ].
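
To illustrate lateration, the sketch below solves the classic 2D trilateration problem: given the known positions of three transmitters and estimated distances to each, it linearizes the circle equations and solves the resulting least-squares system. The anchor coordinates and distances are made-up example values.

```python
# 2D trilateration from three anchors via linear least squares, illustrative only.
import numpy as np

def trilaterate(anchors, distances):
    """anchors: (3, 2) array of known transmitter positions; distances: length-3 array."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    # Subtracting the first circle equation from the other two removes the quadratic terms.
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # assumed beacon positions (m)
distances = np.array([7.07, 7.07, 7.07])                      # assumed ranges measured near (5, 5)
print(trilaterate(anchors, distances))                        # roughly (5, 5) for these values
```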

The proximity technique is based on the closeness of the receiver to recently known locations. Compared with lateration and angulation, the proximity technique can provide only a rough location or a set of possible locations. The radio fingerprinting approach is entirely different from the other techniques and does not consider the distance, angle or nearness between the sender and receiver. Instead, a pattern matching procedure is applied, in which the RSS or other signal properties at a location are compared with the RSS values for different locations stored in a database [ 87 ]. The general steps involved in an RSS fingerprint-based localization system are explained in Fig. 7. For pattern matching, different types of algorithms, including Euclidean distance and machine learning algorithms such as KNN, SVM, etc., are used in the literature; a minimal fingerprint-matching sketch is given after Fig. 7.

Fig. 7 RSS fingerprinting based indoor localization system
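
As a concrete illustration of the fingerprint-matching step, the sketch below performs weighted k-nearest-neighbour matching between an online RSS vector and a small offline radio map. The radio map values, AP ordering and k are assumptions chosen only to make the example run.

```python
# Weighted k-NN matching of an RSS fingerprint against an offline radio map, illustrative only.
import numpy as np

# Offline phase: RSS vectors (dBm) collected at known reference points, one row per point.
radio_map = np.array([[-45.0, -60.0, -72.0],
                      [-50.0, -55.0, -70.0],
                      [-65.0, -48.0, -58.0],
                      [-70.0, -52.0, -50.0]])
locations = np.array([[0.0, 0.0], [0.0, 3.0], [4.0, 0.0], [4.0, 3.0]])  # (x, y) in metres

def knn_locate(rss_query, k=2):
    dists = np.linalg.norm(radio_map - rss_query, axis=1)      # Euclidean distance in signal space
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-6)                    # closer fingerprints weigh more
    return np.average(locations[nearest], axis=0, weights=weights)

print(knn_locate(np.array([-48.0, -57.0, -71.0])))             # online measurement (assumed)
```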

FreeNavi [ 88 ] is a mapless indoor navigation system that relies on the Wi-Fi fingerprints of each landmark’s entrance in the indoor environment. Along with Wi-Fi fingerprints, walking traces of users between two landmarks are utilized to create virtual maps of the indoor environment. A longest common subsequence (LCS) algorithm [ 89 ] that finds similarities between Wi-Fi fingerprints was adopted for virtual map creation as well as indoor localization. The LCS algorithm was developed to tolerate access point (AP) changes in regions where the concentration of APs is high. To provide reliable navigation, two route planning algorithms were introduced in this system: one finds the shortest path between two landmarks, while the other finds the most frequently traveled route. Both algorithms were implemented using Floyd’s shortest path algorithm. FreeNavi was evaluated in a shopping center in Beijing by collecting the fingerprints of 23 landmarks and a total of 1200 m of traces. The virtual maps had a maximum accuracy of 91%, although an 11.9% error step rate was found in navigation because the user has to guess the traveling direction at junctions.
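
An LCS-based similarity between two Wi-Fi fingerprints can be computed with standard dynamic programming over the sequences of observed AP identifiers (for example, MAC addresses ordered by signal strength). The sketch below is a textbook LCS implementation applied to made-up AP lists; it only illustrates the idea, not FreeNavi’s actual tolerance rules.

```python
# Longest-common-subsequence similarity between two AP-identifier sequences, illustrative only.
def lcs_length(a, b):
    """Classic O(len(a)*len(b)) dynamic program for the LCS length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ap_a in enumerate(a, start=1):
        for j, ap_b in enumerate(b, start=1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ap_a == ap_b else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

# Fingerprints as AP MAC lists ordered by RSS (assumed example values).
fp_stored = ["ap:01", "ap:07", "ap:03", "ap:09"]
fp_query = ["ap:01", "ap:03", "ap:09", "ap:12"]
similarity = lcs_length(fp_stored, fp_query) / max(len(fp_stored), len(fp_query))
print(similarity)  # 0.75 here: three APs appear in the same relative order
```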

A Wi-Fi fingerprinting-based navigation system was proposed in [ 90 ]. The proposed system combines Wi-Fi fingerprinting with a radio path loss model for location estimation. The position estimation algorithm was based on particle filter and K-nearest neighbor (K-NN) algorithms. Dijkstra’s algorithm was implemented for the shortest path calculation between the source and destination. The authors also examined the performance of three fingerprint matching algorithms, namely, the Kalman filter, the unscented Kalman filter and K-NN. The results showed that the average errors of the three algorithms were similar, at approximately 1.6 m; however, K-NN had the greatest maximum error.

In Wi-Fi-based indoor navigation systems, fluctuations in RSS can result in poor positioning accuracy. To overcome this issue, a fingerprint spatial gradient (FSG) was introduced in [ 91 ]. The proposed method makes use of the spatial relationship of RSS fingerprints between several nearby locations. To profile the FSG, the authors introduced an algorithm that picks a group of nearby fingerprints that maximize spatial stability as well as fingerprint similarity. A pattern matching approach is used to compare the stored FSG and the queried FSG using similarity measures such as the cumulative angle function, cosine similarity or discrete Fréchet distance. The average accuracy of the position estimation was between 3 and 4 m.

In Wi-Fi-based indoor positioning and navigation systems, the radio fingerprinting approach has been widely used for estimating the position of the RF signal receiver. The fingerprinting approach follows a pattern matching technique in which the properties of the currently received signals are compared with the signal properties stored during the offline or training phase. Over the last 10 years, various machine learning algorithms, such as SVM [ 92 ], KNN [ 93 ] and neural networks [ 92 ], have been utilized for pattern matching in radio fingerprint-based indoor localization methods. Compared to traditional machine learning algorithms, deep learning algorithms such as CNNs, RNNs, etc., have demonstrated their effectiveness in various tasks, such as image classification, text recognition and intrusion detection. In this context, deep neural network-based approaches [ 94 , 95 ] have been used in fingerprint-based indoor localization systems in recent years.

Jin-Woo et al. [ 96 ] proposed an indoor localization system that utilizes a CNN for the Wi-Fi fingerprinting task. Since fluctuations in RSS and multipath issues can cause errors in location estimation, training with little data can lead to ineffective models. The proposed method uses 2D radio maps as inputs to train the CNN model; the 2D virtual map used as input is created from the 1D signals. The developed deep CNN architecture consists of four convolutional layers, two max-pooling layers, and two fully connected layers. Even though it is a lightweight deep CNN model, it outperformed all other deep neural network-based methods proposed before it and achieved a mean accuracy of 95.41%. Since 2D radio maps are used for training the deep CNN, it can learn both the signal strength and the topology of the radio maps, which makes the proposed system robust to small RSS fluctuations.
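
For orientation, the sketch below defines a small CNN of roughly the shape described (four convolutional layers, two max-pooling layers, two fully connected layers) operating on single-channel 2D radio maps. The input size, channel counts and number of location classes are assumptions; this is not the authors’ exact network.

```python
# A small CNN over 2D radio-map inputs for fingerprint classification, illustrative only.
import torch
import torch.nn as nn

class RadioMapCNN(nn.Module):
    def __init__(self, num_locations=16):            # number of location classes (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),    # assumes 32x32 radio-map inputs
            nn.Linear(128, num_locations),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = RadioMapCNN()
radio_maps = torch.randn(4, 1, 32, 32)               # a dummy batch of 2D radio maps
print(model(radio_maps).shape)                        # -> torch.Size([4, 16])
```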

Mittal et al. [ 97 ] adapted CNNs for a Wi-Fi-based indoor localization system for mobile devices. The work presents a novel technique for transforming Wi-Fi signatures into images and a CNN framework named CNN-LOC. Instead of training with an available dataset, the authors constructed their own database by collecting RSSI data from a test environment. One of the novelties of the work is the conversion of RSSI data to image data: for each location, the collected RSSI data are converted to a grayscale image using the Hadamard product method. Similar to [ 14 ], this work uses a lightweight deep CNN model comprising five CNN layers. To improve the scalability of the system, CNN-LOC is integrated with a hierarchical classifier; hierarchical classifiers are used to scale up small CNN architectures for larger problems. The proposed hierarchical classifier consists of three layers: the first layer finds the floor number, the second detects the corridor and the third estimates the location of the mobile device. The system was tested on 3 indoor paths extending over 90 m. The obtained results show that the average localization error was less than 2 m.

Ibrahim et al. [ 98 ] proposed an approach to improve localization accuracy by reducing the randomness and noise found in RSS values. Time series of RSS values are fed to a CNN as input. A hierarchical CNN architecture was employed for predicting the fine-grained location of the user: the first layer is responsible for detecting the building, and the second and third layers predict the floor number and the location of the user, respectively. The proposed CNN model was evaluated on the UJIIndoorLoc dataset, which consists of Wi-Fi RSS fingerprints collected from multiple multi-storied buildings. The demonstrated results show that the proposed hierarchical CNN predicts the building and floor with an accuracy of 100%. The average localization error is 2.77 m, which is acceptable for Wi-Fi-based systems.

Li et al. [ 99 ] proposed a multi-modal framework for indoor localization tasks in a mobile edge computing environment. The presented work focuses on multiple-model-based localization approaches and their drawbacks, and it proposes theoretical solutions to overcome these shortcomings. Many machine learning models exist for RSS-based indoor localization tasks; even though they are effective in test environments, they fail to repeat the same performance in practical situations. Many factors in indoor areas, such as refrigerators, temperature and doors, can affect localization performance. In theory, building distinct models for distinct surroundings is an effective method for indoor localization, but multiple-model-based approaches also have drawbacks: too many models have to be built, and unstable factors that affect RSS remain. To address these issues, two combinatorial optimization problems are formulated, an external feature selection problem and a model selection for location estimation problem, and the NP-hardness of these problems is analyzed in this work.

Wireless technology-based indoor localization systems are prone to errors because of non-line-of-sight issues, inconsistency in received signals, fluctuations in RSS, etc. In large-scale wireless localization systems, the information is sparse compared with the number of sensors. The main challenge in these systems is recovering the sparse signals for further processing to localize the user. Compressive sensing is a popular signal processing technique for efficiently acquiring and reconstructing signals, and it has been used in wireless-based indoor positioning systems [ 100 , 101 ] to recover sparse signals efficiently. Many of the existing compressive sensing techniques are intended to solve the issues of a single application and lack dynamic adaptability. Zhang et al. [ 102 ] proposed a learning-based joint compressive sensing technique to address these challenges. They introduced a learning technique that can learn the basis of the sparse transformation from the compressive sensing measurement output instead of from historical data, since acquiring a large amount of historical data is costly and learning from specific historical data can affect dynamic adaptability.

A hybrid navigation system that combines magnetic matching (MM), PDR and Wi-Fi fingerprinting was proposed in [ 103 ]. Since such systems combine different approaches, the user can navigate even through regions where Wi-Fi signals are poor or where the environment has indistinctive magnetic features. The location of the user was resolved by finding the least mean absolute difference between the estimated fingerprint or magnetic profile and the predetermined value of the respective candidate in the database. An attitude determination technique [ 104 ] and a PDR [ 105 ] method were integrated to implement the proposed navigation algorithm. To improve the Wi-Fi and MM results, three separate levels of quality control using a PDR-based Kalman filter were introduced. The results demonstrated that the proposed method has an accuracy of 3.2 m in regions with a sufficient number of APs and 3.8 m in regions with few APs.

iBILL [ 106 ] integrates an iBeacon device, inertial sensors, and a magnetometer to localize users in large indoor areas using a smartphone. iBeacon is a variant of the BLE protocol developed by Apple Inc. [ 107 ]. The proposed system has two operational modes. If the user is within the range of the beacons, an RSS-based trilateration algorithm is adopted for localization. Otherwise, the system enters particle filter localization (PFL) mode, which considers magnetic fields and data from the inertial sensors. Since the PFL mode itself cannot compute the initial position of the user, it assumes the last location obtained in iBeacon localization mode as the initial position. Accelerometer and gyroscope data are used for updating the location and direction of the particles, respectively, and the particles represent the walking distance and direction of the user. To overcome the limitation of using magnetic fields alone for assigning particle weights, the system also considers the probability distributions of the step lengths and turning angles of the particles when determining the weights. The iBILL system reduced the computational overhead of PFL and solved the limitations associated with the unknown initial location and heavy shaking of the smartphone. iBILL achieved a lower localization error than the dead reckoning approach and Magicol [ 108 ]. Magicol combines magnetic fields and Wi-Fi signals using a “two-pass bidirectional particle filter” for localization and consumes less power than systems that rely only on Wi-Fi signals. In the Magicol and dead reckoning approaches, the localization error increased drastically when walking for many steps, whereas iBILL maintained consistent localization accuracy even during long walks.
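
A minimal particle-filter-localization step in the spirit of the PFL mode might look like the sketch below: particles are propagated with the PDR step, re-weighted by a magnetic-field likelihood, and resampled. This is a generic sketch, not iBILL's exact weighting scheme (which also uses the step-length and turning-angle distributions); the magnetic map, noise levels, and step values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def pfl_step(particles, weights, step_len, heading, mag_obs, mag_map, sigma_mag=3.0):
    """One particle-filter update: propagate with the PDR step, weight by how
    well the surveyed magnetic field at each particle matches the measured
    field, then resample. `mag_map(x, y)` is an assumed map lookup."""
    # propagate: noisy step length and heading per particle
    noisy_len = step_len + rng.normal(0.0, 0.1, len(particles))
    noisy_head = heading + rng.normal(0.0, 0.05, len(particles))
    particles[:, 0] += noisy_len * np.cos(noisy_head)
    particles[:, 1] += noisy_len * np.sin(noisy_head)
    # weight: Gaussian likelihood of the magnetic measurement at each particle
    expected = np.array([mag_map(x, y) for x, y in particles])
    weights *= np.exp(-0.5 * ((mag_obs - expected) / sigma_mag) ** 2)
    weights /= weights.sum()
    # resample (multinomial) to avoid weight degeneracy
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.normal([0.0, 0.0], 0.5, size=(500, 2))
weights = np.full(500, 1.0 / 500)
particles, weights = pfl_step(particles, weights, 0.7, np.pi / 4, 48.0,
                              lambda x, y: 45.0 + 0.5 * x + 0.2 * y)
```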

Lee et al. [ 109 ] proposed an indoor localization system that utilizes the inbuilt sensors of smartphones, such as Bluetooth receivers, accelerometers, and barometers. The RSSI values of the signals received from Bluetooth beacons are used for location estimation via a trilateration algorithm. PDR is used to reduce the uncertainty in the RSS identifiers, which improves location estimation by tracing the direction and steps of the user. Atmospheric pressure measured by the barometer is utilized for vertical location estimation. Due to the limitations of smartphone sensors, the proposed method could not deliver satisfactory results in real-world scenarios.

A simple but efficient Bluetooth beacon-based navigation system using smartphones was proposed in [ 110 ]. The system utilizes RSSI measurements for position estimation. The positioning algorithm [ 111 ] first measures the RSSI from each beacon and performs a noise removal operation. The “Log-Path Loss model” [ 112 ], applied to the mean of the RSSI values, is used to estimate the distance from each beacon. The algorithm implements a proportional division method to estimate the position of users who are near two or more beacons: the line representing the corridor in which the beacons are installed is divided in proportion to the estimated distances to the two nearby beacons. When only one beacon is near the user and the other is far, the algorithm assumes the user's position is on the other side of that nearby beacon. Dijkstra's shortest path algorithm was adopted for finding the shortest route for navigation. The system performed well in a small indoor region.
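
A hedged sketch of the two positioning ingredients, converting the averaged RSSI to a distance with a log-distance path loss model and then dividing the corridor segment in proportion to the two distances, is given below; the reference power at 1 m, the path loss exponent, and the beacon coordinates are illustrative values, not the calibrated parameters of [ 110 , 111 , 112 ].

```python
import numpy as np

def rssi_to_distance(rssi_mean, rssi_at_1m=-59.0, path_loss_exp=2.0):
    """Log-distance path loss model: rssi = rssi_at_1m - 10*n*log10(d),
    solved for the distance d. Parameters are typical indoor values."""
    return 10.0 ** ((rssi_at_1m - rssi_mean) / (10.0 * path_loss_exp))

def position_between_beacons(p1, p2, d1, d2):
    """Proportional division: place the user on the corridor segment between
    two beacons, splitting it in the ratio of the estimated distances."""
    t = d1 / (d1 + d2)
    return np.asarray(p1) + t * (np.asarray(p2) - np.asarray(p1))

d1 = rssi_to_distance(np.mean([-63, -65, -64]))
d2 = rssi_to_distance(np.mean([-71, -70, -72]))
print(position_between_beacons([0.0, 0.0], [10.0, 0.0], d1, d2))
```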

DRBM [ 113 ] is a dead reckoning algorithm that combines a “Bluetooth propagation model” and multiple sensors for improved localization accuracy. The “Bluetooth propagation model” uses linear regression for feature extraction. An individual parameter that varies with the characteristics of the user is combined with accelerometer data to calculate the exact number of steps covered by the user. The results from the Bluetooth propagation model and the sensor-based step calculation are then fused using a Kalman filter to improve positioning accuracy. The results demonstrated that the positioning errors were within 0.8 m.
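
A one-dimensional Kalman filter that predicts with the step-based estimate and corrects with the Bluetooth-model position conveys the flavor of this fusion; the process and measurement noise values below are illustrative, and the actual DRBM filter may use a different state and model.

```python
def kalman_fuse(x_est, p_est, step_length, bt_position, q=0.05, r=0.5):
    """One-dimensional Kalman update: predict the position from step-based dead
    reckoning, then correct it with the position derived from the Bluetooth
    propagation model. q is process noise, r is measurement noise (assumed)."""
    # predict: advance by the detected step length
    x_pred = x_est + step_length
    p_pred = p_est + q
    # update: blend in the Bluetooth-derived position
    k = p_pred / (p_pred + r)                 # Kalman gain
    x_new = x_pred + k * (bt_position - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
for step, bt in [(0.7, 0.9), (0.7, 1.5), (0.7, 2.3)]:
    x, p = kalman_fuse(x, p, step, bt)
print(round(x, 2))
```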

Reference [ 114 ] examined the performance of machine learning classifiers, such as SVM, Random Forest, and the Bayes classifier, for the Bluetooth low-energy beacon fingerprinting method. The experimental infrastructure was created using beacons provided by Estimote and iBeeks; both types of beacons use the Eddystone profile developed by Google. The authors evaluated the performance of the algorithms on different smartphones with a preinstalled fingerprinting Android app. Eddystone packets from each beacon are scanned over a period of time to obtain the RSSI values, and the MAC address of the beacon and the associated RSSI values are logged for the training process. The open source project ‘Find’ was adapted for the whole task, and several machine learning algorithms are already available on the ‘Find’ servers. The results showed that Random Forest improved positioning accuracy by 30% compared to the Bayes classifier and identified the correct location 91% of the time.
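
With scikit-learn, the fingerprint-classification part of such an evaluation reduces to a few lines; the RSSI values, beacon count, and location labels below are synthetic stand-ins for data collected by a logging app.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy BLE fingerprint data: rows are RSSI vectors (one column per beacon),
# labels are named locations.
rng = np.random.default_rng(0)
centers = {"lobby": [-55, -80, -90], "corridor": [-75, -60, -85], "lab": [-88, -78, -58]}
X = np.vstack([np.array(c) + rng.normal(0, 3, size=(50, 3)) for c in centers.values()])
y = np.repeat(list(centers.keys()), 50)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[-57, -79, -92]]))        # expected: 'lobby'
```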

In recent years, BLE beacon-based technology has been used to develop assistive navigation systems for people with visual impairments. A blind or visually impaired user with minimal knowledge of smartphones can utilize these systems to find their way indoors in train stations, museums, university premises, etc. Basem et al. [ 115 ] proposed a BLE beacon-based indoor navigation system for people with visual impairments. The proposed system utilizes a fuzzy logic framework for estimating the position of the user in indoor areas, with BLE fingerprinting as the basic positioning methodology. The authors analyzed the performance of various versions of the fingerprinting algorithm, including fuzzy logic type 1, fuzzy KNN, and fuzzy logic type 2, as well as traditional methods such as proximity, trilateration, and centroid, for indoor localization. The fuzzy logic type 2 method outperformed all other methods, with an average localization error of just 0.43 m.

Murata et al. [ 116 ] proposed a smartphone-based indoor localization system that can be extended for blind navigation in large indoor environments containing multi-storey buildings. The work addressed six key challenges for smartphone-based indoor localization in large and complex environments, associated with the mobility of the user and the nature of large-scale environments. The challenges include accurate and continuous localization, scaling the system to multiple floors, varied RSS values received from the same transmitter by different devices at the same location, varied walking patterns of individuals, signal delay, etc. The authors improved a probabilistic localization algorithm using various techniques to address these challenges. RSSI from BLE beacons and data from the smartphone's embedded IMU are utilized for location estimation. The proposed system was evaluated in a large shopping mall (21,000 m² area) with 10 individuals, including blind people and people with low vision. The proposed techniques reduced the mean localization error of the probabilistic localization algorithm from 3 to 1.5 m.

Ahmetovic et al. [ 117 ] proposed a smartphone-based indoor navigation system for people with visual impairments. The proposed system, NavCog, relies on RSSI from BLE beacons and the inbuilt sensors of smartphones for localizing the user in indoor areas. The location of the user is estimated using a fingerprint matching algorithm; among the many fingerprint matching algorithms proposed in the literature, the authors chose a variant of the KNN algorithm that matches the observed RSSI values with the RSSI fingerprints stored during the offline stage. Apart from basic localization and navigation services, NavCog can notify the user about their surroundings, such as points of interest, stairs, or elevators. NavCog was evaluated on a university campus with the help of six people with visual impairments. All experiments were recorded with a video camera to check whether users missed any turns during navigation, waited for instructions, hit any obstacles, etc. The current version of NavCog lacks the functionality to notify users when they are traveling in the wrong direction.

Kim et al. [ 118 ] proposed a smartphone-based indoor navigation assistant for people with visual impairments. The system, StaNavi, uses a smartphone and BLE beacons installed in the indoor environment to guide users in a large train station. Along with the RSS from the BLE beacons, data from the smartphone's inbuilt compass is utilized to estimate the position and orientation of the visually impaired user. A commonly used indoor localization method, the proximity detection technique, is used to compute the user's position, and a cloud-based server provides the navigation route information. Similar to StaNavi, the GuideBeacon [ 119 ] indoor navigation system also utilizes the smartphone compass and BLE beacons to estimate the position and orientation of visually impaired users in indoor environments, but it uses low-cost BLE beacons to facilitate indoor tracking. Its position estimation procedure identifies the beacon nearest to the user with the proximity detection technique, and GuideBeacon can provide audio, haptic, and tactile feedback to the user. Reference [ 120 ] proposed an indoor navigation system for people with visual and hearing impairments; its localization algorithm uses proximity detection and nearness-to-beacon techniques to track the position of the user. It is notable that BLE beacon-based navigation systems have become popular only in the last 5 to 6 years, which can be attributed to the availability of low-cost smartphones and the lower cost of beacons compared with the RF transmitters used previously. In the case of blind navigation, only a few BLE beacon-based systems have been proposed in recent years.

ISAB [ 121 ] is a wayfinding system developed for assisting people with VI in libraries, and it utilizes various technologies, such as Wi-Fi, Bluetooth, and RFID, each for a different purpose. First, Wi-Fi fingerprinting is used for localization and for navigating from the entrance of the building to the desired floor. Floor plans of the indoor environment are represented as graphs, and Dijkstra's algorithm is implemented for path planning. Bluetooth technology is used for navigating users to the shelf where the desired item is placed: each shelf contains a shelf reader with an attached Bluetooth module, and users can pair their smartphones with this module to receive instructions. Finally, RFID technology is used to find the desired item on the shelf, where each item is embedded with an RFID tag. Additionally, an effective user interface was developed to simplify the interaction of blind people with the system. The proposed system helped users reach a target with an accuracy of up to 10 cm.
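
Dijkstra's algorithm over a floor-plan graph, as used here and in several other systems above for path planning, can be sketched as follows; the waypoint names and walking distances are invented for illustration.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a floor-plan graph whose nodes are landmarks/waypoints
    and whose edge weights are walking distances (metres assumed)."""
    dist, prev = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                                # stale queue entry
        for neigh, w in graph.get(node, {}).items():
            nd = d + w
            if nd < dist.get(neigh, float("inf")):
                dist[neigh], prev[neigh] = nd, node
                heapq.heappush(queue, (nd, neigh))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

floor_plan = {"entrance": {"hall": 12.0}, "hall": {"elevator": 8.0, "shelf_A": 15.0},
              "elevator": {"shelf_A": 5.0}, "shelf_A": {}}
print(dijkstra(floor_plan, "entrance", "shelf_A"))
# -> (['entrance', 'hall', 'elevator', 'shelf_A'], 25.0)
```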

PERCEPT [ 122 ] is an RFID technology-based navigation system developed for people with VI. PERCEPT consists of passive RFID tags pasted in the indoor environment, a “glove” containing an RFID reader, and kiosks placed at the entrances and exits of landmarks. The kiosks contain information about key destinations and landmarks. The system also includes an Android smartphone that provides instructions to the user through a text-to-speech engine; the phone communicates with the glove and the PERCEPT server using Wi-Fi and Bluetooth. The directions provided by PERCEPT lack proximity information; moreover, they are not presented in terms of steps or feet.

PERCEPT II [ 123 ] includes a low-cost navigation method using smartphones alone (the gloves were omitted). The cost of system deployment was decreased by creating a survey tool for orientation and mobility that aids in labeling the landmarks. NFC tags were also deployed at specific landmarks to provide navigational instructions by means of audio. The navigation module implemented Dijkstra's algorithm for route generation.

An RFID-based indoor wayfinding system for people with VI and elderly people was proposed in [ 124 ]. The proposed system consists of a wearable device and a server. The wearable device consists of an RFID reader that can read passive tags, an ultrasonic range finder for detecting obstacles in the path, and a voice controller. The server comprises a localization module and a navigation module, the latter implementing Dijkstra's algorithm for path planning. For efficient localization, the authors considered the normal movements of a sighted person while developing the system. The navigation module is linked to an obstacle avoidance algorithm in which obstacles are categorized as expected or unexpected by assigning a predefined probability measure; obstacles are further categorized as mobile or fixed, and a triangle set is formulated for detecting mobile obstacles. An earphone is also embedded in the system for providing effective guidance to the user.

Roll Caller [ 125 ] introduced a method that relates the location of the user and the targeted object based on frequency shifts caused in the RFID system. The Roll Caller prototype comprises passive RFID tags attached to items, an RFID reader with multiple antennas, and a smartphone with inertial sensors, such as accelerometers and magnetometers. An anchor timestamp is used to represent the value of the frequency shift, and this timestamp is integrated with inertial measurements, such as the acceleration and direction from the smartphone's sensors, for allocating antennas. The proposed method reduced the system overhead, since the locations of the person and the item are not calculated separately; instead, a spatial relationship between the object and the user is used to locate them.

DOVI [ 126 ] combined IMU and RFID technology to assist people with VI in indoor areas. DOVI's navigation unit consists of a chip (NavChip ISNC01 from InterSense Inc.) that contains a three-axis accelerometer, a barometer, and a magnetometer. An extended Kalman filter compensates for the sensor and gravity biases, while the RFID module reduces the drift errors of the IMU. Dijkstra's algorithm is implemented to estimate the shortest navigation routes, and a haptic navigation unit provides feedback and instructions to the user by means of vibration.

Traditional RFID positioning algorithms suffer from fluctuations in location estimation due to multipath and environmental interference in RFID systems. To address this issue, a new positioning algorithm called BKNN was introduced in [ 127 ]. BKNN combines Bayesian probability with the KNN algorithm. In the implemented UHF-RFID system, RSS values were modeled using a Gaussian probability distribution for localization, and irregular RSS values were filtered out using Gaussian filters. The integration of Bayesian estimation with KNN improved the localization accuracy, and the average error in location estimation of the proposed system was approximately 15 cm.
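
The two ingredients, Gaussian filtering of raw RSS readings and a probability-weighted KNN estimate, can be roughly illustrated as below. This is a loose sketch of the idea, not the exact BKNN formulation of [ 127 ]; the filtering threshold, weighting function, and all values are assumptions.

```python
import numpy as np

def gaussian_filter_rss(samples, keep_sigma=1.0):
    """Fit a Gaussian to the RSS readings, discard readings outside
    +/- keep_sigma standard deviations, and return the mean of the rest."""
    mu, sigma = np.mean(samples), np.std(samples)
    kept = samples[np.abs(samples - mu) <= keep_sigma * sigma]
    return kept.mean() if kept.size else mu

def weighted_knn(rss_vec, fingerprints, positions, k=3):
    """Weight the k nearest reference points by a Gaussian-shaped likelihood
    of their RSS distance (a loose stand-in for the Bayesian step)."""
    d = np.linalg.norm(fingerprints - rss_vec, axis=1)
    idx = np.argsort(d)[:k]
    w = np.exp(-0.5 * (d[idx] / (d[idx].mean() + 1e-9)) ** 2)
    w /= w.sum()
    return (positions[idx] * w[:, None]).sum(axis=0)

raw = np.array([-61, -60, -62, -45, -61, -63])        # -45 is an outlier reading
filtered = gaussian_filter_rss(raw)
fingerprints = np.array([[-60., -75.], [-62., -70.], [-50., -80.], [-70., -65.]])
positions = np.array([[0., 0.], [1., 0.], [2., 1.], [0., 2.]])
print(weighted_knn(np.array([filtered, -72.0]), fingerprints, positions))
```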

A VLC technology-based navigation system that utilizes the existing LEDs inside an indoor environment was proposed in [ 128 ]. The proposed system comprises four LED bulbs attached to the ceiling of the room and interconnected on the same circuit so that they operate as a single optical transmitter. A trilateration algorithm is implemented to locate the receiver/user, and the target's path is tracked using Kalman filtering and sequential importance sampling particle filtering. The authors also compared the performance of the Kalman filter and the particle filter for tracking the user; the demonstrated results show that the particle filter outperforms the Kalman filter for user tracking.

AVII [ 129 ] is a navigation system for visually impaired people using VLC technology. Along with VLC-based positioning, a geomagnetic sensor was introduced for providing accurate direction, and a sonar sensor was embedded in the system for detecting obstacles along the navigation path. Dijkstra's algorithm was modified and utilized in the system to enable the user to select the best and shortest navigation routes. The system gives instructions to the user through an embedded earphone in the form of audio signals.

In [ 130 ], a VLC-based positioning system was integrated with the magnetic sensors of an Android smartphone for assisting people with VI in indoor environments. The proposed prototype uses the Android phone to calculate the position of the user, and a speech synthesizer on the smartphone provides instructions to the user through an earphone. The latitude and longitude of each location are encoded as a visible light ID in the light associated with that location. Once the visible light receiver obtains the light ID, it transmits the ID to the smartphone via Bluetooth. The smartphone integrates this information with the direction calculated from the geomagnetic sensor and provides route instructions to the user. The smartphone was attached to a strap that hung freely around the user's neck; due to the irregular motion of the users, the strap swung more than expected, which led to errors in the geomagnetic sensor readings and in position estimation.

Reference [ 131 ] proposed a method for mitigating random errors in inertial sensors and removing outliers in a UWB system. Multipath and non-line-of-sight conditions are the main causes of outliers in UWB systems. The proposed system consists of a UWB system and an inertial navigation system; the latter consists of an accelerometer, a gyroscope, and a magnetometer. The UWB system uses the TDOA method and a least squares algorithm for position estimation. An “anti-magnetic ring” was introduced to remove the outliers in the UWB system under non-line-of-sight conditions, and it was the first method to do so. For improved positioning accuracy, the information from the accelerometer and the UWB system was fused using a “double-state adaptive Kalman filter” algorithm based on the “Sage-Husa adaptive Kalman filter” and the “fading adaptive Kalman filter”. The results showed that the inclusion of the “anti-magnetic ring” and the “double-state adaptive Kalman filter” algorithm reduced the positioning errors.
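
A TDOA position fix of the kind used by the UWB subsystem can be computed by least squares over the hyperbolic range-difference equations. The sketch below applies SciPy's generic nonlinear least squares to simulated, noise-free time differences; the anchor layout and the solver choice are our assumptions, not the algorithm of [ 131 ].

```python
import numpy as np
from scipy.optimize import least_squares

C = 299792458.0                      # propagation speed of the UWB pulse (m/s)

def tdoa_residuals(p, anchors, tdoas):
    """Residuals of the hyperbolic TDOA equations: the range difference to each
    anchor (relative to anchor 0) should equal c times the measured time
    difference."""
    ranges = np.linalg.norm(anchors - p, axis=1)
    return (ranges[1:] - ranges[0]) - C * tdoas

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
true_tdoa = (np.linalg.norm(anchors[1:] - true_pos, axis=1)
             - np.linalg.norm(anchors[0] - true_pos)) / C
est = least_squares(tdoa_residuals, x0=np.array([5.0, 5.0]),
                    args=(anchors, true_tdoa))
print(np.round(est.x, 2))            # approximately [3. 4.]
```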

Table 3 provides a comparison of communication technology-based indoor navigation and positioning systems.

Pedestrian dead reckoning based indoor positioning and wayfinding systems

Hsu et al. [ 132 ] developed a system that depends only on the inbuilt sensors of smartphones and is devoid of any external infrastructure. The user's steps are detected from the acceleration values along the three axes obtained from the accelerometer. The user's step length is calculated by combining the maximum and minimum acceleration values; since step length varies from person to person, an individual parameter that varies with the user is also fused with these values. Direction changes of the user are determined from the gyroscope data. Since PDR approaches may accumulate localization errors, calibration marks are provided on the map and floor. The main limitations of this work were the variations in sensor data caused by the holding position of the phone (such as in a pocket or bag) and the absence of a path planning algorithm, which increased the difficulty of navigation.
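
A common way to realize peak-based step detection together with a max/min-acceleration step-length model of the kind described above is sketched below; the thresholds, the Weinberg-style fourth-root formula, and the per-user constant k are illustrative assumptions rather than the exact formulas of [ 132 ].

```python
import numpy as np

def detect_steps(acc_norm, threshold=11.0, min_gap=15):
    """Simple peak-based step detector on the acceleration magnitude
    (m/s^2, ~50 Hz samples); threshold and minimum peak spacing are tuning
    values chosen for this synthetic example."""
    steps = []
    for i in range(1, len(acc_norm) - 1):
        is_peak = acc_norm[i] > acc_norm[i - 1] and acc_norm[i] >= acc_norm[i + 1]
        if is_peak and acc_norm[i] > threshold and (not steps or i - steps[-1] >= min_gap):
            steps.append(i)
    return steps

def step_length(a_max, a_min, k=0.45):
    """Weinberg-style estimate combining max and min acceleration of the step;
    k is the user-specific calibration constant."""
    return k * (a_max - a_min) ** 0.25

t = np.arange(0, 4, 0.02)
acc = 9.8 + 2.5 * np.sin(2 * np.pi * 2 * t)          # synthetic walk, ~2 steps/second
idx = detect_steps(acc)
lengths = [step_length(acc[max(i - 12, 0):i + 13].max(),
                       acc[max(i - 12, 0):i + 13].min()) for i in idx]
print(len(idx), round(sum(lengths), 2))              # detected steps and total distance
```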

Hasan and Mishuk [ 133 ] proposed a PDR-based navigation method for smart glasses. Since PDR methods need sensors for acquiring data, the authors used the smart eyewear “JINS MEME”, which contains a three-axis accelerometer and gyroscope. PDR methods usually calculate the current position of a user from the last known position; therefore, the initial position of the user must be known for tracking. Calculating the current position requires the step length of the user, the number of steps covered since the last known position, and the azimuth or heading angle of the user. Data from the three-axis accelerometer are used in the step detection process. If the sensors are mounted on the foot, step detection can be achieved using the zero-velocity update; however, the sensors in this system were attached to the smart glasses, so the norm of the accelerometer signal was used for detecting steps: if the norm crosses a predefined value, one step is counted. Since step length varies with the user, a parameter is included in the step length calculation; this parameter was obtained from an experiment in which 4000 steps of 23 people were analyzed. In addition, an extended Kalman filter was introduced to merge the accelerometer and gyroscope values. This data fusion was used to calculate the heading angle and to rectify PDR and gyro sensor errors such as bias, noise, and tilt; fused data are more accurate for further calculation than data from the individual sensors.

Ju et al. [ 134 ] proposed a PDR-based navigation system for smartphones that uses “multiple virtual tracking (MVT)”. The proposed system overcomes the limitation of existing methods in which all walls and passages are assumed to be parallel or perpendicular, as well as the limitation of walking indoors for a long time. Microelectromechanical system IMUs comprising three-axis accelerometers, magnetometers, and gyroscopes are employed to calculate the position, and the system does not rely on existing infrastructure or designed maps. The proposed MVT algorithm uses the concept of map matching to examine potential pedestrian trajectories [ 135 ], and it passes through two stages: in the first stage, “the main track utilizes the dominant direction as a matching function when it is significantly reliable”, and in the second stage, “the data obtained on an ambiguous direct straight line to utilize the dominant direction is expanded by multiple virtual tracking for diversified cases”. Generally, the PDR system passes through the following four steps: (1) step detection, in which a peak detection approach is employed to detect steps accurately; (2) step length estimation, in which the relationship between the walking status and the step length is determined from the accelerometer; (3) heading estimation, in which the cumulative error over time of the “Attitude Reference System” is used to reduce the accumulated heading error and an “extended Kalman filter (EKF)” is designed to compute the heading error as well as the gyroscope biases; and (4) position calculation, in which the heading value obtained from the “Attitude Reference System” and the previous step are used to compute the user's current location, and the Mahalanobis distance is employed to measure the similarity between the dominant direction and the estimated heading. The proposed PDR-MVT system passes through (1) the basic tracking steps described earlier; (2) virtual trajectory awareness, in which a dominant direction block may be added to check whether the user is following the dominant direction even in unreliable situations; and (3) “virtual track extension and reduction”, in which the virtual track is extended if there is a likelihood of a dominant direction when a new straight line appears. The experimental results demonstrated the effectiveness of the proposed PDR-MVT system compared with conventional PDR systems that use the dominant direction in sophisticated trajectories.
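
Step (4), position calculation, is the standard dead-reckoning propagation: the previous position is advanced by the estimated step length along the estimated heading. A minimal sketch, with an assumed heading convention, is:

```python
import numpy as np

def pdr_update(position, step_length, heading):
    """Dead-reckoning propagation: advance the previous position by one step
    along the estimated heading (heading in radians, measured from the x-axis
    here; conventions vary between papers)."""
    return position + step_length * np.array([np.cos(heading), np.sin(heading)])

pos = np.array([0.0, 0.0])
for length, heading in [(0.7, 0.0), (0.7, 0.0), (0.7, np.pi / 2)]:
    pos = pdr_update(pos, length, heading)
print(np.round(pos, 2))      # -> [1.4 0.7]
```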

Hsu et al. [ 136 ] proposed a navigation system for pedestrian localization composed of a triaxial accelerometer, triaxial magnetometer, triaxial gyroscope, Bluetooth transmission module, and a microcontroller. The intention of the developed system is to reduce the integral error so as to accurately estimate and reconstruct the walking trajectories of the pedestrian. The system does not require external positioning techniques and comprises a wearable inertial navigation device and a computer. The navigation device can be placed on the foot of the pedestrian to construct walking trajectories, and the signals are received by the computer through the Bluetooth wireless module. A walking trajectory algorithm consisting of a trajectory height estimation function and a trajectory reconstruction function is implemented to build the user's trajectories. The sensor merging method, which is based on a “double-stage quaternion”, uses an EKF to merge the angular velocity, acceleration, and magnetic signals. A rotation stage is implemented to provide a stable rotation rate and eliminate the interference resulting from the Earth's rotation rate. During the experimental phase, users were asked to wear the device and walk for both long and short periods. The distance error and the end-to-end error were approximately 8.33 m and 4.81 m, respectively, and the average height errors were 6.42% and 3.60% for walking downstairs and upstairs, respectively.

In [ 137 ], a dead reckoning approach for estimating and tracking a user with a handheld smartphone was proposed. The proposed approach addresses the limitation of positioning a mobile phone with respect to the surrounding environment, and it depends on identifying relative variations in the distance traveled along the user's walking direction. The actual position of a user is approximated by combining the distance traveled with the previous position information. Calibration is required to initialize the algorithm so that it can recognize the movement path of the user with respect to a reference system. The navigation path is estimated as a sequence of orthogonal segments, each consisting of the distance computed in steps and the movement heading determined with respect to the reference system. The movement direction, or heading, is computed from the magnetic field measurements. If the algorithm is not capable of identifying the rotation of the user, or if sudden variations occur in the data, it raises an alarm to perform the calibration again. The current PDR approach does not consider slight rotations of the user, and an assessment of the approach indicated that the maximum error of the system is not more than four steps.

PDR-based indoor positioning systems are prone to localization errors because of sensor biases, drift, etc. Recent PDR-based systems have introduced multi-sensor fusion approaches and integrations with multiple positioning technologies [ 138 , 139 , 140 ] to reduce the integral and drift errors observed in PDR. Qiu et al. [ 141 ] proposed a multi-sensor fusion approach for alleviating the errors present in traditional PDR-based systems. The work utilizes foot-mounted magnetic/inertial sensors to estimate the location of the user; the foot-mounted ‘Xsens’ sensor contains a three-axis accelerometer, gyroscope, and magnetometer. A walking stance phase detection algorithm that uses data from both the accelerometer and the gyroscope was introduced, and sensor fusion was achieved with an extended Kalman filter. The performance of the sensor fusion approach was evaluated in both indoor and outdoor environments, and the demonstrated results show that the overall error of the proposed system was less than 1% of the total traveled distance.

The integration of multiple positioning technologies or sensor fusion approaches can improve the accuracy of PDR-based systems, and PDR has therefore been integrated with other technologies to alleviate positioning errors. Kuang et al. [ 142 ] proposed a smartphone-based indoor positioning approach that combines magnetic matching with PDR technology. Magnetic matching has been used alone or together with Wi-Fi or BLE technology for indoor navigation in recent years; when used alone, it is hard for a system to differentiate the magnetic field at a single point from that at nearby points. In this context, the authors proposed matching the magnetic field sequence along the traveled path contour estimated by PDR. Moreover, the drift errors generated in the PDR method are regulated by an extended Kalman filter with the help of the reference magnetic field sequence, and a Gauss–Newton iterative technique is used to compare the measured magnetic field sequence with the reference sequences. The demonstrated results show that the proposed method achieved an error of no more than 2.5 m with a lower computational load than other existing solutions.

A real-time smartphone-based indoor localization system that combines BLE technology with the PDR approach was proposed in [ 143 ]. The system utilizes the inertial measurement units of the smartphone and the RSS from BLE beacons to estimate the position of the user in indoor areas, and a smartphone application was built to fuse the inertial data and the RSS from the beacons. The inertial data are used to compute the step length and the heading angle. Step detection is achieved by analyzing the height of each peak between zero crossings (the instants at which a signal changes its sign); if the height is above a pre-assigned threshold value, a step is detected. The step length estimation adopts a state-of-the-art approach that requires the length of the user's leg and the vertical change of the body's center of mass as input, along with a correction factor defined in the literature. The heading estimation procedure fuses data from the accelerometer and gyroscope, computes the device's attitude, and converts the attitude to quaternion and Euler angle representations; the relationship between these representations is used to find the heading angle. The step length and heading angle are then fused with the RSS from the BLE beacons to reset the location of the user and decrease the drift error.

Shan-Jung et al. [ 144 ] proposed an indoor navigation system that integrates PDR technology with the Wi-Fi fingerprinting method. The work mainly focuses on calibrating the fingerprinting database with the aid of the inertial measurement units of the smartphone. A quaternion-based extended Kalman filter is employed for sensor fusion to reduce the positioning error of the PDR method. In the PDR method, accelerometer data are used to detect valid steps, with the accelerometer noise removed by a low-pass filter. The step detection algorithm can identify steps from both the magnitude and the temporal phase, where a pair of peaks and valleys is considered one step. An existing method is adopted for computing the step length; it considers the standard deviation of the acceleration data, since stride length and walking speed are related. Landmarks in the indoor environment are utilized to provide directions to the user, with the Wi-Fi fingerprinting method used to identify the nearest landmark. When the user reaches a landmark, the integrated errors of the PDR method are reset; in this way, the drift errors of the PDR system are alleviated.

Mercury [ 145 ] is a network localization and navigation system using smartphones for indoor applications. Localization of the user is achieved by fusing measurements from the IMU with range measurements among the users, where the range measurements are acquired using acoustic signals. The Mercury system relies on a smartphone's built-in IMU, speaker, earphone, and Bluetooth transceiver. Users localize themselves by exploiting temporal cooperation, spatial cooperation among them, and knowledge of the map. The acceleration and angular velocity provided by the IMU are utilized for phone orientation estimation [ 146 ], and the obtained acceleration samples are transformed from the “phone's coordinate system” to the “Earth's coordinate system” based on the phone orientation. The step direction and step length of users are calculated from the acquired acceleration samples. For range measurements, acoustic signals are recorded and transmitted using the earphones and speakers; these tasks are performed when a user engages in spatial cooperation, which is achieved by finding the range between users and then exchanging position information. A user tries to sense the acoustic signals produced by another user to measure the range; if the acoustic signals are absent, the user performs two-way ranging and measures the two-way propagation time to estimate the range [ 147 ]. The step direction and step length calculated from the IMU measurements and the range measurements are fused with the map information using a belief propagation algorithm, which finds the positional belief of the user; Bluetooth is used to exchange positional beliefs among the users. During system evaluation, the map of the indoor environment was partitioned into small squares of 0.7 m × 0.7 m, as required by the belief propagation algorithm. In the single-user scenario, Mercury was compared with two systems, Mapcraft [ 148 ] and a system using a Kalman filter technique [ 149 ]; Mercury proved its robustness compared with the other two systems even in the absence of the user's initial position. In multi-user scenarios, Mercury achieved exceptional localization performance because of the spatial cooperation. Table 4 shows a comparison of PDR-based navigation systems.

Evaluation criteria

In this section, we propose criteria to be considered when evaluating indoor positioning and navigation systems. These criteria will be helpful for investigations into positioning and navigation systems, and considering them at the development stage can result in an ideal navigation system.

Accuracy and precision

Accuracy is one of the main performance metrics of a navigation system and is mainly associated with the indoor positioning module. The localization error is expressed in terms of accuracy, computed as the average Euclidean distance between the ground truth location coordinates and the estimated location coordinates. A more realistic approach than the Euclidean distance was introduced in [ 150 ], which represents floor plans, obstacles, and inter-floor traversal routes as polygons for error estimation.

Precision deals with the consistency of system performance, i.e., the consistency of positioning over time and across scenarios, and can be represented in terms of the cumulative distribution function of the error. In normal human navigation, a fall-off in these metrics up to a limit can be acceptable, but for people with VI, such fall-offs may affect their safety.
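
As a concrete illustration of the two metrics, the sketch below computes the mean Euclidean error (accuracy) and an error percentile taken from the empirical CDF of the per-fix errors (precision) for a handful of made-up position fixes.

```python
import numpy as np

def localization_metrics(estimated, ground_truth, percentile=90):
    """Accuracy as the mean Euclidean error and precision as an error
    percentile drawn from the empirical CDF of per-fix errors."""
    errors = np.linalg.norm(np.asarray(estimated) - np.asarray(ground_truth), axis=1)
    return errors.mean(), np.percentile(errors, percentile)

est = [[1.2, 0.9], [4.8, 2.1], [7.5, 3.9]]
gt  = [[1.0, 1.0], [5.0, 2.0], [7.0, 4.0]]
mean_err, p90 = localization_metrics(est, gt)
print(round(mean_err, 2), round(p90, 2))
```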

Cost

The cost of navigation systems can be split among the positioning, navigation, and HMI modules. In particular, the cost of the positioning module includes the individual costs of the infrastructure components, their maintenance, and the devices used for position estimation. Employing Wi-Fi or VLC-based systems can reduce the cost of the infrastructure components because most buildings are already fitted with Wi-Fi APs or LEDs; however, the initial implementation cost of Wi-Fi APs is high compared with Bluetooth beacons or RFID tags. The cost of the navigation module is associated with the adopted map construction methodology. Google Indoor Maps is an open source SDK, but its service is limited to a few countries; several other paid map-building SDKs are available in the market.

Usually, the HMI module does not account for a large share of the cost of the whole system. In smartphone-based navigation or positioning systems, users interact with the system through speakers, a microphone, and earphones. Noise from the navigation environment creates difficulties for people with VI when using audio feedback; thus, a haptic feedback system should be implemented along with audio feedback.

Scalability

The scalability of the system can be evaluated by considering two parameters, namely, geography and the number of users. Geography represents the area of the indoor environment covered. An increase in the number of users in the same region can create confusion in positioning due to interference from the signals of communication technology-based systems. Computer vision-based systems also encounter problems in indoor scene recognition due to occlusions created by other users.

Robustness

Robustness is the ability of the system to withstand adverse conditions, such as component malfunctions and losses of signal. The system should continue to provide navigation and tracking of the user even if one or two infrastructure components fail or malfunction; in particular, a Wi-Fi- or BLE-based system should work properly even if one or two Wi-Fi APs or BLE beacons fail.

Navigation systems are developed to reduce the manual effort and time spent in the wayfinding process, so the system design should consider the preferences of consumers. In this context, the size of the system, its power consumption, and its real-time performance have to be considered: the positioning system should provide location information in real time, and the navigation module should provide real-time route presentation and turn-by-turn directions. This design suits people with no disabilities or physical impairments; however, people with VI require additional support, such as haptic feedback, an obstacle detection module, and a location awareness module, to compensate for their disability.

Future work and discussion

Computer vision-based navigation and positioning systems can provide better awareness of the surrounding environment than systems that utilize communication technologies or PDR approaches; thus, computer vision-based frameworks are more appropriate for navigation by individuals with VI. In computer vision-based systems, deep learning methodologies are observed to be more precise than purely conventional methodologies. Hybrid techniques that use deep learning for scene recognition or image retrieval and SIFT or ORB features for position estimation achieved better accuracy than systems based purely on deep learning. Three-dimensional feature-based localization methods solved the limitations associated with SIFT- or SURF-based matching. Human occlusions adversely affect visual feature-based positioning [ 81 ]; the elimination of human objects from the visual scene recognition process can be further explored to solve these issues for both static and dynamic camera setups.

Compared to 2-dimensional image feature-based approaches, 3-dimensional features and RGB-D image-based methods are more reliable for indoor navigation. Visual positioning systems, which are considered the future of indoor navigation technology, utilize RGB-D images to train models for localizing the user in indoor areas. Most RGB-D-based methods have not been extended to fully working indoor navigation systems; instead, many articles propose methods and perform offline testing on publicly available datasets. Only a few works have extended RGB-D indoor positioning techniques to fully working indoor navigation systems, and these are still implemented in a client–server manner because not every mobile device can bear the heavy computation required for position estimation. Optimizing visual indoor positioning models for deployment on mobile devices such as smartphones is one of the least explored topics in this research domain. Some systems have utilized the Google Tango VPS for the development of indoor navigation, but Google Tango is supported on only a few devices and Google has since discontinued its support. Google's newer tool, ARCore, provides similar features to Google Tango and can be extended for the development of visual positioning-based indoor navigation systems.

Communication technology-based approaches that integrate PDR methodologies or magnetic fingerprinting methods improved the coverage and precision of the system. The drift errors and initial position estimation problems of PDR-based systems are alleviated by introducing communication technologies, such as BLE and Wi-Fi, or magnetic fingerprinting approaches along with PDR. Fingerprint spatial gradients (spatial relation between RSS fingerprints of nearby locations) reduced the issues associated with RSS fluctuations. PDR systems integrated with Bluetooth technology seem to be more precise, and such systems can be further extended to correct radio maps of Wi-Fi.

Other than the seven indoor positioning technologies discussed in this article, there exist many other technologies, such as audio signal-based localization [ 151 ] and magnetic field-based localization [ 152 ]. Audio signal-based (acoustic) localization is more accurate and cheaper than other RF technologies [ 153 ] because it requires only microphones and speakers, which are available in every smart mobile device. Moreover, RF signals propagate much faster than sound, which implies that acoustic ranging can provide higher accuracy. In this context, acoustic localization technology can be combined with BLE or Wi-Fi-based approaches, where BLE or Wi-Fi is utilized for rough location estimation and acoustic signals are used for computing the precise location.

Step length estimation is a crucial task in PDR-based systems. Step length depends on the user's movement, velocity, and physical properties, such as height, and its estimation is still an open research issue: precise step length estimation can minimize the accumulated errors of PDR systems. The step lengths for walking, running, and walking with a heavy load differ even for the same person, and differentiating these walking scenarios remains an open challenge in PDR-based navigation research. Another interesting research direction in the field of PDR systems is deploying deep learning algorithms to determine the type of pedestrian movement from the data of the gyroscopes and accelerometers installed in smartphones.

Conclusion

This paper presented a detailed overview of the advancements in indoor positioning and wayfinding systems. We classified the existing systems based on the positioning technologies they adopt and provided a comprehensive review of the indoor positioning and wayfinding methods proposed in the last 6 years, analyzing their advantages and limitations. This article also discussed different criteria for evaluating navigation and positioning systems and outlined potential directions for future research in indoor positioning and wayfinding systems.

Availability of data and materials

Not applicable.

Abbreviations

GPS: Global Positioning System

RF: Radio frequency

PDR: Pedestrian dead reckoning

RFID: Radio Frequency Identification

UWB: Ultra-wide band

GLONASS: Global Navigation Satellite System

HMI: Human–machine interaction

SURF: Speeded Up Robust Feature

SIFT: Scale Invariant Feature Transform

VLC: Visible light communication

RSS: Received signal strength

AOA: Angle of arrival

TOA: Time of arrival

TDOA: Time difference of arrival

BLE: Bluetooth low energy

VI: Visual impairments

IMU: Inertial measurement unit

OCR: Optical character recognition

SLAM: Simultaneous localization and mapping

CamShift: Continuous adaptive mean shift

PCA: Principal component analysis

ORB: Oriented Fast and Rotated Brief

SVM: Support Vector Machine

LCS: Lowest common subsequence

AP: Access point

KNN: K nearest neighbor

FSG: Fingerprint spatial gradient

MM: Magnetic matching

PFL: Particle filter localization

MVT: Multiple virtual tracking

EKF: Extended Kalman filter

References

Godha S, Lachapelle G (2008) Foot mounted inertial system for pedestrian navigation. Meas Sci Technol 19(7):075202. https://doi.org/10.1088/0957-0233/19/7/075202


Meers S, Ward K (2005) A substitute vision system for providing 3d perception and gps navigation via electro-tactile stimulation

Koyuncu H, Yang SH (2010) A survey of indoor positioning and object locating systems. IJCSNS Int J Comput Sci Netw Secur 10(5):121–128


Zandbergen PA, Barbeau SJ (2011) Positional accuracy of assisted GPS data from high-sensitivity GPS-enabled mobile phones. J Navig 64(3):381–399. https://doi.org/10.1017/S0373463311000051

Bay H, Ess A, Tuytelaars T, Van Gool L (2008) Speeded-up robust features (SURF). Comput Vis Image Underst 110(3):346–359. https://doi.org/10.1016/j.cviu.2007.09.014

Wang H, Zhang S (2011) Evaluation of global descriptors for large scale image retrieval. In: International conference on image analysis and processing, Springer, pp 626–635

Lindeberg T (2012) Scale invariant feature transform. Scholarpedia 7(5):10491. https://doi.org/10.4249/scholarpedia.10491

LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521(7553):436. https://doi.org/10.1038/nature14539

Lee YH, Medioni G (2015) Wearable RGBD indoor navigation system for the blind. In: Agapito L, Bronstein MM, Rother C (eds) Computer vision—ECCV 2014 workshops. Springer, Cham, pp 493–508


Kamisaka D, Muramatsu S, Iwamoto T, Yokoyama H (2011) Design and implementation of pedestrian dead reckoning system on a mobile phone. IEICE Trans Inf Syst 94(6):1137–1146

Ban R, Kaji K, Hiroi K, Kawaguchi N (2015) Indoor positioning method integrating pedestrian dead reckoning with magnetic field and wifi fingerprints. In: 2015 eighth international conference on mobile computing and ubiquitous networking (ICMU), pp 167–172. https://doi.org/10.1109/ICMU.2015.7061061

Woodman OJ (August 2007) An introduction to inertial navigation. Technical report UCAM-CL-TR-696, University of Cambridge, Computer Laboratory. https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-696.pdf . Accessed 10 Nov 2019

Bouet M, dos Santos AL (2008) Rfid tags: positioning principles and localization techniques. In: 2008 1st IFIP wireless days, pp 1–5. https://doi.org/10.1109/WD.2008.4812905

Fu Q, Retscher G (2009) Active RFID trilateration and location fingerprinting based on rssi for pedestrian navigation. J Navig 62(2):323–340. https://doi.org/10.1017/S0373463308005195

He S, Chan S-G (2016) Wi-fi fingerprint-based indoor positioning: recent advances and comparisons. IEEE Commun Surv Tutor 18(1):466–490. https://doi.org/10.1109/COMST.2015.2464084

Farid Z, Nordin R, Ismail M (2013) Recent advances in wireless indoor localization techniques and system. J Comput Netw Commun 2013:12. https://doi.org/10.1155/2013/185138

Do T-H, Yoo M (2016) An in-depth survey of visible light communication based positioning systems. Sensors. https://doi.org/10.3390/s16050678

Alarifi A, Al-Salman A, Alsaleh M, Alnafessah A, Al-Hadhrami S, Al-Ammar MA, Al-Khalifa HS (2016) Ultra wideband indoor positioning technologies: analysis and recent advances. Sensors. https://doi.org/10.3390/s16050707

Hart PE, Nilsson NJ, Raphael B (1968) A formal basis for the heuristic determination of minimum cost paths. IEEE Trans Syst Sci Cybern 4(2):100–107. https://doi.org/10.1109/TSSC.1968.300136

Johnson DB (1973) A note on dijkstra’s shortest path algorithm. J ACM 20(3):385–388. https://doi.org/10.1145/321765.321768


Stentz A et al (1995) The focussed d* algorithm for real-time replanning. IJCAI 95:1652–1659

Floyd RW (1962) Algorithm 97: shortest path. Commun ACM 5(6):345. https://doi.org/10.1145/367766.368168

Martinez-Sala AS, Losilla F, Sánchez-Aarnoutse JC, García-Haro J (2015) Design, implementation and evaluation of an indoor navigation system for visually impaired people. Sensors 15(12):32168–32187. https://doi.org/10.3390/s151229912

Katzschmann RK, Araki B, Rus D (2018) Safe local navigation for visually impaired users with a time-of-flight and haptic feedback device. IEEE Trans Neural Syst Rehabil Eng 26(3):583–593. https://doi.org/10.1109/TNSRE.2018.2800665

Fallah N, Apostolopoulos I, Bekris K, Folmer E (2013) Indoor human navigation systems: a survey. Interact Comput 25(1):21–33. https://doi.org/10.1093/iwc/iws010

Harle R (2013) A survey of indoor inertial positioning systems for pedestrians. IEEE Commun Surv Tutor 15(3):1281–1293. https://doi.org/10.1109/SURV.2012.121912.00075

Davidson P, Piché R (2017) A survey of selected indoor positioning methods for smartphones. IEEE Commun Surv Tutor 19(2):1347–1370. https://doi.org/10.1109/COMST.2016.2637663

Hassan NU, Naeem A, Pasha MA, Jadoon T, Yuen C (2015) Indoor positioning using visible led lights: a survey. ACM Comput Surv 48(2):20–12032. https://doi.org/10.1145/2835376

Li B, Muñoz JP, Rong X, Chen Q, Xiao J, Tian Y, Arditi A, Yousuf M (2019) Vision-based mobile indoor assistive navigation aid for blind people. IEEE Trans Mob Comput 18(3):702–714. https://doi.org/10.1109/TMC.2018.2842751

Cabaret L, Lacassagne L (2014) What is the world’s fastest connected component labeling algorithm? In: 2014 IEEE workshop on signal processing systems (SiPS), pp 1–6. https://doi.org/10.1109/SiPS.2014.6986069

Rong X, Li B, Munoz JP, Xiao J, Arditi A, Tian Y (2016) Guided text spotting for assistive blind navigation in unfamiliar indoor environments. In: International symposium on visual computing, Springer, pp 11–22

Tian Y, Yang X, Yi C, Arditi A (2013) Toward a computer vision-based wayfinding aid for blind persons to access unfamiliar indoor environments. Mach Vis Appl 24(3):521–535. https://doi.org/10.1007/s00138-012-0431-7

Lee YH, Medioni G (2016) RGB-D camera based wearable navigation system for the visually impaired. Comput Vis Image Underst 149:3–20. Special issue on Assistive Computer Vision and Robotics - Assistive Solutions for Mobility, Communication and HMI. https://doi.org/10.1016/j.cviu.2016.03.019

Huang AS, Bachrach A, Henry P, Krainin M, Maturana D, Fox D, Roy N (2017) Visual odometry and mapping for autonomous flight using an RGB-D camera. In: Robotics research, Springer, pp 235–252

Labbé M, Michaud F (2014) Online global loop closure detection for large-scale multi-session graph-based slam. In: 2014 IEEE/RSJ international conference on intelligent robots and systems, pp 2661–2666 . https://doi.org/10.1109/IROS.2014.6942926

McDonald J, Kaess M, Cadena C, Neira J, Leonard JJ (2011) 6-dof multi-session visual slam using anchor nodes. In: European conference on mobile robots (ECMR), pp 69–76. http://mural.maynoothuniversity.ie/6497/

Garcia G, Nahapetian A (2015) Wearable computing for image-based indoor navigation of the visually impaired. In: Proceedings of the conference on wireless health. WH ’15, ACM, New York, NY, USA, pp 17–1176. https://doi.org/10.1145/2811780.2811959

Manlises C, Yumang A, Marcelo M, Adriano A, Reyes J (2016) Indoor navigation system based on computer vision using camshift and d* algorithm for visually impaired. In: 2016 6th IEEE international conference on control system, computing and engineering (ICCSCE), pp 481–484. https://doi.org/10.1109/ICCSCE.2016.7893623

Bai J, Liu D, Su G, Fu Z (2017) A cloud and vision-based navigation system used for blind people. In: Proceedings of the 2017 international conference on artificial intelligence, automation and control technologies. AIACT ’17, ACM, New York, NY, USA, pp 22–1226. https://doi.org/10.1145/3080845.3080867

Chen X, Liu X, Wang Y, Gales MJF, Woodland PC (2016) Efficient training and evaluation of recurrent neural network language models for automatic speech recognition. IEEE/ACM Trans Audio Speech Lang Process 24(11):2146–2157. https://doi.org/10.1109/TASLP.2016.2598304

Cho K, Van Merriënboer B, Gulcehre C, Bahdanau D, Bougares F, Schwenk H, Bengio Y (2014) Learning phrase representations using RNN encoder–decoder for statistical machine translation. arXiv preprint arXiv:1406.1078

Liang M, Hu X (2015) Recurrent convolutional neural network for object recognition. In: The IEEE conference on computer vision and pattern recognition (CVPR)

Farabet C, Couprie C, Najman L, LeCun Y (2013) Learning hierarchical features for scene labeling. IEEE Trans Pattern Anal Mach Intell 35(8):1915–1929. https://doi.org/10.1109/TPAMI.2012.231

Noh H, Hong S, Han B (2015) Learning deconvolution network for semantic segmentation. In: The IEEE international conference on computer vision (ICCV)

Akbani O, Gokrani A, Quresh M, Khan FM, Behlim SI, Syed TQ (2015) Character recognition in natural scene images. In: 2015 international conference on information and communication technologies (ICICT), pp 1–6. https://doi.org/10.1109/ICICT.2015.7469575

Neumann L, Matas J (2016) Real-time lexicon-free scene text localization and recognition. IEEE Trans Pattern Anal Mach Intell 38(9):1872–1885. https://doi.org/10.1109/TPAMI.2015.2496234

Shao K-Y, Gao Y, Wang N, Zhang H-Y, Li F, Li W-C (2010) Paper money number recognition based on intersection change. In: Third international workshop on advanced computational intelligence, pp 533–536. https://doi.org/10.1109/IWACI.2010.5585167

Weber M, Wolf P, Zöllner JM (2016) Deeptlr: a single deep convolutional network for detection and classification of traffic lights. In: 2016 IEEE intelligent vehicles symposium (IV), pp 342–348. https://doi.org/10.1109/IVS.2016.7535408

Athira SV, George M, Jose BR, Mathew J (2017) A global image descriptor based navigation system for indoor environment. Procedia Comput Sci 115:466–473. https://doi.org/10.1016/j.procs.2017.09.086

Pearson J, Robinson S, Jones M (2017) Bookmark: appropriating existing infrastructure to facilitate scalable indoor navigation. Int J Hum Comput Stud 103:22–34. https://doi.org/10.1016/j.ijhcs.2017.02.001

Li L, Xu Q, Chandrasekhar V, Lim J, Tan C, Mukawa MA (2017) A wearable virtual usher for vision-based cognitive indoor navigation. IEEE Trans Cybern 47(4):841–854. https://doi.org/10.1109/TCYB.2016.2530407

Dong J, Noreikis M, Xiao Y, Ylä-Jääski A (2018) Vinav: a vision-based indoor navigation system for smartphones. IEEE Trans Mob Comput 18(6):1461–1475

Rahman Su, Ullah S, Ullah S (2019) A mobile camera based navigation system for visually impaired people. In: Proceedings of the 7th international conference on communications and broadband networking, pp 63–66

Kunhoth J, Karkar A, Al-Maadeed S, Al-Attiyah A (2019) Comparative analysis of computer-vision and ble technology based indoor navigation systems for people with visual impairments. Int J Health Geogr 18(1):29

Tyukin A, Priorov A, Lebedev I (2016) Research and development of an indoor navigation system based on the digital processing of video images. Pattern Recogn Image Anal 26(1):221–230. https://doi.org/10.1134/S1054661816010260

Bista SR, Giordano PR, Chaumette F (2016) Appearance-based indoor navigation by IBVS using line segments. IEEE Robot Autom Lett 1(1):423–430. https://doi.org/10.1109/LRA.2016.2521907

Akinlar C, Topal C (2011) Edlines: a real-time line segment detector with a false detection control. Pattern Recogn Lett 32(13):1633–1642. https://doi.org/10.1016/j.patrec.2011.06.001

Zhang L, Koch R (2013) An efficient and robust line segment matching approach based on LBD descriptor and pairwise geometric consistency. J Vis Commun Image Represent 24(7):794–805. https://doi.org/10.1016/j.jvcir.2013.05.006

Tian Y, Yang X, Arditi A (2010) Computer vision-based door detection for accessibility of unfamiliar environments to blind persons. In: International conference on computers for handicapped persons, Springer, pp 263–270

Costa P, Fernandes H, Martins P, Barroso J, Hadjileontiadis LJ (2012) Obstacle detection using stereo imaging to assist the navigation of visually impaired people. Procedia Comput Sci 14, 83–93. In: Proceedings of the 4th international conference on software development for enhancing accessibility and fighting info-exclusion (DSAI 2012). https://doi.org/10.1016/j.procs.2012.10.010

Murillo AC, Gutiérrez-Gómez D, Rituerto A, Puig L, Guerrero JJ (2012) Wearable omnidirectional vision system for personal localization and guidance. In: 2012 IEEE computer society conference on computer vision and pattern recognition workshops, pp 8–14. https://doi.org/10.1109/CVPRW.2012.6239189

Huang Z, Gu N, Hao J, Shen J (2018) 3DLoC: 3D features for accurate indoor positioning. Proc ACM Interact Mob Wearable Ubiquitous Technol 1(4):141–114126. https://doi.org/10.1145/3161409

Lee DC, Hebert M, Kanade T (2009) Geometric reasoning for single image structure recovery. In: 2009 IEEE conference on computer vision and pattern recognition, pp 2136–2143. https://doi.org/10.1109/CVPR.2009.5206872

Wang E, Yan W (2014) iNavigation: an image based indoor navigation system. Multimed Tools Appl 73(3):1597–1615. https://doi.org/10.1007/s11042-013-1656-9

Kawaji H, Hatada K, Yamasaki T, Aizawa K (2010) Image-based indoor positioning system: fast image matching using omnidirectional panoramic images. In: Proceedings of the 1st ACM international workshop on multimodal pervasive video analysis. MPVA ’10, ACM, New York, NY, USA, pp 1–4. https://doi.org/10.1145/1878039.1878041

Ke Y, Sukthankar R et al (2004) PCA-sift: a more distinctive representation for local image descriptors. CVPR 2(4):506–513

Deniz O, Paton J, Salido J, Bueno G, Ramanan J (2014) A vision-based localization algorithm for an indoor navigation app. In: 2014 eighth international conference on next generation mobile apps, services and technologies, pp 7–12. https://doi.org/10.1109/NGMAST.2014.18

Xiao A, Chen R, Li D, Chen Y, Wu D (2018) An indoor positioning system based on static objects in large indoor scenes by using smartphone cameras. Sensors. https://doi.org/10.3390/s18072229

Ren S, He K, Girshick R, Sun J (2015) Faster R-CNN: Towards real-time object detection with region proposal networks. In: Cortes, C, Lawrence ND, Lee DD, Sugiyama M, Garnett R (eds) Advances in neural information processing systems, Curran Associates, Inc. 28, pp 91–99. http://papers.nips.cc/paper/5638-faster-r-cnn-towards-real-time-object-detection-with-region-proposal-networks.pdf . Accessed 8 Nov 2019

Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556

Chen Y, Chen R, Liu M, Xiao A, Wu D, Zhao S (2018) Indoor visual positioning aided by cnn-based image retrieval: training-free, 3D modeling-free. Sensors. https://doi.org/10.3390/s18082692

Rublee E, Rabaud V, Konolige K, Bradski GR (2011) Orb: an efficient alternative to sift or surf. In: ICCV, Citeseer, vol 11, p 2

Handa A, Whelan T, McDonald J, Davison AJ (2014) A benchmark for RGB-D visual odometry, 3d reconstruction and slam. In: 2014 IEEE international conference on robotics and automation (ICRA), pp 1524–1531. https://doi.org/10.1109/ICRA.2014.6907054

Sturm J, Magnenat S, Engelhard N, Pomerleau F, Colas F, Cremers D, Siegwart R, Burgard W (2011) Towards a benchmark for RGB-D SLAM evaluation. In: RGB-D workshop on advanced reasoning with depth cameras at robotics: science and systems conf. (RSS), Los Angeles, United States. https://hal.archives-ouvertes.fr/hal-01142608

Kendall A, Grimes M, Cipolla R (2015) Posenet: a convolutional network for real-time 6-DOF camera relocalization. In: The IEEE international conference on computer vision (ICCV), pp 2938–2946

Guo F, He Y, Guan L (2017) RGB-D camera pose estimation using deep neural network. In: 2017 IEEE global conference on signal and information processing (GlobalSIP), pp 408–412. https://doi.org/10.1109/GlobalSIP.2017.8308674

Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A (2015) Going deeper with convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1–9

Adorno J, DeLaHoz Y, Labrador MA (2016) Smartphone-based floor detection in unstructured and structured environments. In: 2016 IEEE international conference on pervasive computing and communication workshops (PerCom workshops), pp 1–6. https://doi.org/10.1109/PERCOMW.2016.7457136

Bashiri FS, LaRose E, Badger JC, D’Souza RM, Yu Z, Peissig P (2018) Object detection to assist visually impaired people: a deep neural network adventure. In: International symposium on visual computing, Springer, pp 500–510

Bashiri FS, LaRose E, Peissig P, Tafti AP (2018) Mcindoor20000: a fully-labeled image dataset to advance indoor objects detection. Data Brief 17:71–75

Jayakanth K (2019) Comparative analysis of texture features and deep learning method for real-time indoor object recognition. In: 2019 international conference on communication and electronics systems (ICCES), IEEE, pp 1676–1682

Afif M, Ayachi R, Said Y, Pissaloux E, Atri M (2020) An evaluation of retinanet on indoor object detection for blind and visually impaired persons assistance navigation. Neural Process Lett. https://doi.org/10.1007/s11063-020-10197-9

Takács M, Bencze T, Szabó-Resch MZ, Vámossy Z (2015) Object recognition to support indoor robot navigation. In: 2015 16th IEEE international symposium on computational intelligence and informatics (CINTI), pp 239–242. https://doi.org/10.1109/CINTI.2015.7382930

Wang Y, Ma X, Leus G (2011) Robust time-based localization for asynchronous networks. IEEE Trans Signal Process 59(9):4397–4410. https://doi.org/10.1109/TSP.2011.2159215

Zhang D, Xia F, Yang Z, Yao L, Zhao W (2010) Localization technologies for indoor human tracking. In: 2010 5th international conference on future information technology, pp 1–6. https://doi.org/10.1109/FUTURETECH.2010.5482731

Maccabe AB, Mielke AM, Brennan SM, Torney DC (2004) Radiation detection with distributed sensor networks. Computer 34(08):57–59. https://doi.org/10.1109/MC.2004.103

Werner M (2014) Indoor location-based services: prerequisites and foundations. Springer, Berlin

Book   Google Scholar  

Guo Y, Wang W, Chen X (2017) FreeNavi: Landmark-based mapless indoor navigation based on wifi fingerprints. In: 2017 IEEE 85th vehicular technology conference (VTC Spring), pp 1–5. https://doi.org/10.1109/VTCSpring.2017.8108350

Chen X, Kong J, Guo Y, Chen X (2014) An empirical study of indoor localization algorithms with densely deployed aps. In: 2014 IEEE global communications conference, pp 517–522. https://doi.org/10.1109/GLOCOM.2014.7036860

Han B, Zhao L (2017) An indoor positioning and navigation technique based on wi-fi fingerprint and environment information. In: China satellite navigation conference, Springer, pp 381–393

Wu C, Xu J, Yang Z, Lane ND, Yin Z (2017) Gain without pain: accurate wifi-based localization using fingerprint spatial gradient. Proc ACM Interact Mob Wearable Ubiquitous Technol 1(2):29–12919. https://doi.org/10.1145/3090094

Liu H, Darabi H, Banerjee P, Liu J (2007) Survey of wireless indoor positioning techniques and systems. IEEE Trans Syst Man Cybern Part C 37(6):1067–1080

Dayekh S, Affes S, Kandil N, Nerguizian C (2010) Cooperative localization in mines using fingerprinting and neural networks. In: 2010 IEEE wireless communication and networking conference, IEEE, pp 1–6

Zhang W, Liu K, Zhang W, Zhang Y, Gu J (2016) Deep neural networks for wireless localization in indoor and outdoor environments. Neurocomputing 194:279–287

Félix G, Siller M, Alvarez EN (2016) A fingerprinting indoor localization algorithm based deep learning. In: 2016 eighth international conference on ubiquitous and future networks (ICUFN), IEEE, pp 1006–1011

Jang J-W, Hong S-N (2018) Indoor localization with wifi fingerprinting using convolutional neural network. In: 2018 tenth international conference on ubiquitous and future networks (ICUFN), IEEE, pp 753–758

Mittal A, Tiku S, Pasricha S (2018) Adapting convolutional neural networks for indoor localization with smart mobile devices. In: Proceedings of the 2018 on great lakes symposium on VLSI, pp 117–122

Ibrahim M, Torki M, ElNainay M (2018) CNN based indoor localization using RSS time-series. In: 2018 IEEE symposium on computers and communications (ISCC), IEEE, pp 01044–01049

Li W, Chen Z, Gao X, Liu W, Wang J (2019) Multimodel framework for indoor localization under mobile edge computing environment. IEEE Internet Things J 6(3):4844–4853

Wei Y, Li W, Chen T (2016) Node localization algorithm for wireless sensor networks using compressive sensing theory. Pers Ubiquitous Comput 20(5):809–819

Liu C, Yao X, Luo J (2019) Multiregional secure localization using compressive sensing in wireless sensor networks. ETRI J 41(6):739–749

Zhang P, Wang J, Li W (2020) A learning based joint compressive sensing for wireless sensing networks. Comput Netw 168:107030

Li Y, Zhuang Y, Lan H, Zhou Q, Niu X, El-Sheimy N (2016) A hybrid wifi/magnetic matching/pdr approach for indoor navigation with smartphone sensors. IEEE Commun Lett 20(1):169–172. https://doi.org/10.1109/LCOMM.2015.2496940

Ren H, Kazanzides P (2012) Investigation of attitude tracking using an integrated inertial and magnetic navigation system for hand-held surgical instruments. IEEE/ASME Trans Mechatron 17(2):210–217. https://doi.org/10.1109/TMECH.2010.2095504

Huang C, Liao Z, Zhao L (2010) Synergism of INS and PDR in self-contained pedestrian tracking with a miniature sensor module. IEEE Sens J 10(8):1349–1359. https://doi.org/10.1109/JSEN.2010.2044238

Wu X, Shen R, Fu L, Tian X, Liu P, Wang X (2017) iBill: using ibeacon and inertial sensors for accurate indoor localization in large open areas. IEEE Access 5:14589–14599. https://doi.org/10.1109/ACCESS.2017.2726088

Betters E (2013) Apple’s ibeacons explained: What it is and why it matters. online publication dated Sep 18, 1–14

Shu Y, Bo C, Shen G, Zhao C, Li L, Zhao F (2015) Magicol: indoor localization using pervasive magnetic field and opportunistic wifi sensing. IEEE J Sel Areas Commun 33(7):1443–1457. https://doi.org/10.1109/JSAC.2015.2430274

Lee K, Nam Y, Min SD (2018) An indoor localization solution using bluetooth rssi and multiple sensors on a smartphone. Multimed Tools Appl 77(10):12635–12654. https://doi.org/10.1007/s11042-017-4908-2

Satan A (2018) Bluetooth-based indoor navigation mobile system. In: 2018 19th international carpathian control conference (ICCC), pp 332–337. https://doi.org/10.1109/CarpathianCC.2018.8399651

Satan A, Toth Z (2018) Development of bluetooth based indoor positioning application. In: 2018 IEEE international conference on future IoT technologies (Future IoT), pp 1–6. https://doi.org/10.1109/FIOT.2018.8325586

Davis J (2015) Indoor wireless RF channels. http://wireless.per.nl/reference/chaptr03/indoor.html . Accessed 10 May 2019

Yu N, Zhan X, Zhao S, Wu Y, Feng R (2018) A precise dead reckoning algorithm based on bluetooth and multiple sensors. IEEE Internet Things J 5(1):336–351. https://doi.org/10.1109/JIOT.2017.2784386

Campana F, Pinargote A, Domínguez F, Peláez E (2017) Towards an indoor navigation system using bluetooth low energy beacons. In: 2017 IEEE second ecuador technical chapters meeting (ETCM), pp 1–6. https://doi.org/10.1109/ETCM.2017.8247464

AL-Madani B, Orujov F, R Maskeliūnas, Damaševičius R, Venčkauskas A (2019) Fuzzy logic type-2 based wireless indoor localization system for navigation of visually impaired people in buildings. Sensors 19(9):2114

Murata M, Ahmetovic D, Sato D, Takagi H, Kitani KM, Asakawa C (2019) Smartphone-based localization for blind navigation in building-scale indoor environments. Pervasive Mob Comput 57:14–32

Ahmetovic D, Gleason C, Ruan C, Kitani K, Takagi H, Asakawa C (2016) Navcog: a navigational cognitive assistant for the blind. In: Proceedings of the 18th international conference on human-computer interaction with mobile devices and services, ACM, pp 90–99

Kim J-E, Bessho M, Kobayashi S, Koshizuka N, Sakamura K (2016) Navigating visually impaired travelers in a large train station using smartphone and bluetooth low energy. In: Proceedings of the 31st annual ACM symposium on applied computing, ACM, pp 604–611

Cheraghi SA, Namboodiri V, Walker L (2017) Guidebeacon: beacon-based indoor wayfinding for the blind, visually impaired, and disoriented. In: 2017 IEEE international conference on pervasive computing and communications (PerCom), IEEE, pp 121–130

Bilgi S, Ozturk O, Gulnerman AG (2017) Navigation system for blind, hearing and visually impaired people in ITU ayazaga campus. In: 2017 international conference on computing networking and informatics (ICCNI), pp 1–5

Abu Doush I, Alshatnawi S, Al-Tamimi A-K, Alhasan B, Hamasha S (2016) ISAB: integrated indoor navigation system for the blind. Interact Comput 29(2):181–202. https://doi.org/10.1093/iwc/iww016

Ganz A, Schafer J, Gandhi S, Puleo E, Wilson C, Robertson M (2012) Percept indoor navigation system for the blind and visually impaired: architecture and experimentation. Int J Telemed Appl 2012:19–191919. https://doi.org/10.1155/2012/894869

Ganz A, Schafer JM, Tao Y, Wilson C, Robertson M (2014) Percept-II: Smartphone based indoor navigation system for the blind. In: 2014 36th annual international conference of the IEEE engineering in medicine and biology society, pp 3662–3665. https://doi.org/10.1109/EMBC.2014.6944417

Tsirmpas C, Rompas A, Fokou O, Koutsouris D (2015) An indoor navigation system for visually impaired and elderly people based on radio frequency identification (RFID). Inf Sci 320:288–305. https://doi.org/10.1016/j.ins.2014.08.011

Lin Qiongzheng, Guo Y (2016) Accurate indoor navigation system using human-item spatial relation. Tsinghua Sci Technol 21(5):521–537. https://doi.org/10.1109/TST.2016.7590321

Loconsole C, Dehkordi MB, Sotgiu E, Fontana M, Bergamasco M, Frisoli A (2016) An IMU and RFID-based navigation system providing vibrotactile feedback for visually impaired people. In: International conference on human haptic sensing and touch enabled computer applications, Springer, pp 360–370

Xu H, Ding Y, Li P, Wang R, Li Y (2017) An RFID indoor positioning algorithm based on bayesian probability and k-nearest neighbor. Sensors. https://doi.org/10.3390/s17081806

Ganti D, Zhang W, Kavehrad M (2014) VLC-based indoor positioning system with tracking capability using Kalman and particle filters. In: 2014 IEEE international conference on consumer electronics (ICCE), pp 476–477. https://doi.org/10.1109/ICCE.2014.6776093

Jayakody A, Meegama CI, Pinnawalage HU, Muwenwella RMHN, Dalpathado SC (2016) AVII [assist vision impaired individual]: an intelligent indoor navigation system for the vision impaired individuals with vlc. In: 2016 IEEE international conference on information and automation for sustainability (ICIAfS), pp 1–6. https://doi.org/10.1109/ICIAFS.2016.7946526

Nakajima M (2013) New indoor navigation system for visually impaired people using visible light communication. EURASIP J Wirel Commun Netw 1:37. https://doi.org/10.1186/1687-1499-2013-37

Fan Q, Sun B, Sun Y, Zhuang X (2017) Performance enhancement of MEMS-based INS/UWB integration for indoor navigation applications. IEEE Sens J 17(10):3116–3130. https://doi.org/10.1109/JSEN.2017.2689802

Hsu H-H, Chang J-K, Peng W-J, Shih TK, Pai T-W, Man KL (2018) Indoor localization and navigation using smartphone sensory data. Ann Oper Res 265(2):187–204. https://doi.org/10.1007/s10479-017-2398-2

Hasan MA, Mishuk MN (2018) Mems IMU based pedestrian indoor navigation for smart glass. Wirel Pers Commun 101(1):287–303. https://doi.org/10.1007/s11277-018-5688-3

Ju H, Park SY, Park CG (2018) A smartphone-based pedestrian dead reckoning system with multiple virtual tracking for indoor navigation. IEEE Sens J 18(16):6756–6764. https://doi.org/10.1109/JSEN.2018.2847356

Shin SH, Park CG, Choi S (2010) New map-matching algorithm using virtual track for pedestrian dead reckoning. ETRI J 32(6):891–900

Hsu Y, Wang J, Chang C (2017) A wearable inertial pedestrian navigation system with quaternion-based extended kalman filter for pedestrian localization. IEEE Sens J 17(10):3193–3206. https://doi.org/10.1109/JSEN.2017.2679138

Giorgi G, Frigo G, Narduzzi C (2017) Dead reckoning in structured environments for human indoor navigation. IEEE Sens J 17(23):7794–7802. https://doi.org/10.1109/JSEN.2017.2725446

Huang H-Y, Hsieh C-Y, Liu K-C, Cheng H-C, Hsu SJ, Chan C-T (2019) Multi-sensor fusion approach for improving map-based indoor pedestrian localization. Sensors 19(17):3786

Luo J, Zhang C, Wang C (2020) Indoor multi-floor 3D target tracking based on the multi-sensor fusion. IEEE Access 8:36836–36846

Poulose A, Eyobu OS, Han DS (2019) A combined PDR and wi-fi trilateration algorithm for indoor localization. In: 2019 international conference on artificial intelligence in information and communication (ICAIIC), IEEE, pp 072–077

Qiu S, Wang Z, Zhao H, Qin K, Li Z, Hu H (2018) Inertial/magnetic sensors based pedestrian dead reckoning by means of multi-sensor fusion. Inf Fus 39:108–119

Kuang J, Niu X, Zhang P, Chen X (2018) Indoor positioning based on pedestrian dead reckoning and magnetic field matching for smartphones. Sensors 18(12):4142

Ciabattoni L, Foresi G, Monteriù A, Pepa L, Pagnotta DP, Spalazzi L, Verdini F (2019) Real time indoor localization integrating a model based pedestrian dead reckoning on smartphone and BLE beacons. J Ambient Intell Humaniz Comput 10(1):1–12

Yu S-J, Jan S-S, De Lorenzo DS (2018) Indoor navigation using wi-fi fingerprinting combined with pedestrian dead reckoning. In: 2018 IEEE/ION position, location and navigation symposium (PLANS), IEEE, pp 246–253

Liu Z, Dai W, Win MZ (2018) Mercury: an infrastructure-free system for network localization and navigation. IEEE Trans Mob Comput 17(5):1119–1133. https://doi.org/10.1109/TMC.2017.2725265

Madgwick SOH, Harrison AJL, Vaidyanathan R (2011) Estimation of IMU and marg orientation using a gradient descent algorithm. In: 2011 IEEE international conference on rehabilitation robotics, pp 1–7. https://doi.org/10.1109/ICORR.2011.5975346

Peng C, Shen G, Zhang Y, Li Y, Tan K (2007) Beepbeep: a high accuracy acoustic ranging system using cots mobile devices. In: Proceedings of the 5th international conference on embedded networked sensor systems. SenSys ’07, ACM, New York, NY, USA, pp 1–14. https://doi.org/10.1145/1322263.1322265

Xiao Z, Wen H, Markham A, Trigoni N (2015) Indoor tracking using undirected graphical models. IEEE Trans Mob Comput 14(11):2286–2301. https://doi.org/10.1109/TMC.2015.2398431

Hilsenbeck S, Bobkov D, Schroth G, Huitl R, Steinbach E (2014) Graph-based data fusion of pedometer and wifi measurements for mobile indoor positioning. In: Proceedings of the 2014 ACM international joint conference on pervasive and ubiquitous computing. UbiComp ’14, ACM, New York, NY, USA, pp 147–158. https://doi.org/10.1145/2632048.2636079

Mendoza-Silva GM, Torres-Sospedra J, Huerta J (2017) A more realistic error distance calculation for indoor positioning systems accuracy evaluation. In: 2017 international conference on indoor positioning and indoor navigation (IPIN), pp 1–8. https://doi.org/10.1109/IPIN.2017.8115950

Cai C, Zheng R, Li J, Zhu L, Pu H, Hu M (2019) Asynchronous acoustic localization and tracking for mobile targets. IEEE Internet Things J

Wu H, Mo Z, Tan J, He S, Chan S-HG (2019) Efficient indoor localization based on geomagnetism. ACM Trans Sens Netw 15(4):1–25

Liu M, Cheng L, Qian K, Wang J, Wang J, Liu Y (2020) Indoor acoustic localization: a survey. Hum-Centric Comput Inf Sci 10(1):2


Acknowledgements

This publication was supported by a Qatar University Collaborative High Impact Grant QUHI-CENG-18/19-1. The findings achieved herein are solely the responsibility of the authors. The contents of this publication are solely the responsibility of the authors and do not necessarily represent the official views of Qatar University.

Author information

Authors and affiliations.

Department of Computer Science and Engineering, Qatar University, Al Jamiaa Street, Doha, Qatar

Jayakanth Kunhoth, AbdelGhani Karkar, Somaya Al-Maadeed & Abdulla Al-Ali


Contributions

JK and AK carried out the work and drafted the manuscript. SA supervised the work and reviewed the manuscript. AA reviewed the manuscript and provided assistance to improve it. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Jayakanth Kunhoth .

Ethics declarations

Competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Kunhoth, J., Karkar, A., Al-Maadeed, S. et al. Indoor positioning and wayfinding systems: a survey. Hum. Cent. Comput. Inf. Sci. 10 , 18 (2020). https://doi.org/10.1186/s13673-020-00222-0


Received : 27 December 2019

Accepted : 04 April 2020

Published : 02 May 2020

DOI : https://doi.org/10.1186/s13673-020-00222-0


Keywords

  • Indoor navigation
  • Indoor positioning
  • Computer vision
  • Visible lights


Collaborative Indoor Positioning Systems: A Systematic Review

Pavel Pascacio

1 Institute of New Imaging Technologies, Universitat Jaume I, 12006 Castellón, Spain

2 Electrical Engineering Unit, Tampere University, 33014 Tampere, Finland

Sven Casteleyn

Joaquín Torres-Sospedra

3 UBIK Geospatial Solutions S.L., 12006 Castellón, Spain

Elena Simona Lohan

Associated data.

Not applicable.

Abstract

Research and development in Collaborative Indoor Positioning Systems (CIPSs) is growing steadily due to their potential to improve on the performance of their non-collaborative counterparts. In contrast to the outdoor scenario, where Global Navigation Satellite Systems are widely adopted, (collaborative) indoor positioning systems rely on a large variety of technologies, techniques, and methods. Moreover, the diversity of evaluation procedures and scenarios hinders a direct comparison. This paper presents a systematic review that gives a general view of current CIPSs. A total of 84 works, published between 2006 and 2020, have been identified. These articles were analyzed and classified according to the described system's architecture, infrastructure, technologies, techniques, methods, and evaluation. The results indicate a growing interest in collaborative positioning and a trend towards the use of distributed architectures and infrastructure-less systems. Moreover, the technologies most used to determine the collaborative position of users are wireless communication technologies (Wi-Fi, Ultra-WideBand, and Bluetooth). The predominant collaborative positioning techniques are Received Signal Strength Indication, Fingerprinting, and Time of Arrival/Flight, and the predominant collaborative methods are particle filters, Belief Propagation, the Extended Kalman Filter, and Least Squares. Simulations are used as the main evaluation procedure. On the basis of the analysis and results, several promising future research avenues and gaps in research were identified.

1. Introduction

The advent of mobile computing, including Internet of Things (IoT) and wearable devices, has changed the traditional scope of positioning systems, which moved from military tracking and civilian navigation to location information [ 1 ]. Location information is a key element to bridge the gap between the physical and the digital world, either for personal [ 2 , 3 , 4 , 5 ] or industrial use [ 6 , 7 , 8 , 9 ].

The new generation of smart applications with location-based services (LBSs) is part of our daily lives. These applications help us get the best route to our workplace using crowdsourced information [ 10 ], find a restaurant based on our preferences [ 11 ], or even remind us of preventive measures in situations that are risky for our health [ 12 ]. Undoubtedly, precise positioning plays a key role in LBSs.

In outdoor environments, Global Navigation Satellite System (GNSSs)—e.g., Global Positioning System (GPS), Globalnaya Navigazionnaya Sputnikovaya Sistema (GLONASS), Galileo, and BeiDou [ 13 ]—are widely adopted for global positioning purposes [ 14 , 15 ]; i.e., GNSS is supposed to provide accurate positioning anywhere on Earth. The GPS accuracy, in terms of positioning error, in smartphones is usually within a 4.9 m radius in clear open sky conditions, but it is capable of centimeter accuracy when it is used in combination with dual-frequency receivers and/or augmentation systems [ 16 , 17 ]. Despite the high accuracy and global coverage provided by GNSSs, they cannot properly operate indoors. The strong signal attenuation, the presence of heavy signal multipath, and other sources of interference invalidate GNSS as a positioning solution indoors [ 14 , 18 ].

In contrast to outdoor environments, indoor environments present diverse and dynamic scenarios with complex geometries. Indoor environments are heterogeneous and include homes, offices, warehouses, hospitals, and shopping malls, among many others. Furthermore, the applications for end-users are diverse and, therefore, have different accuracy and coverage requirements. As pointed out by Mautz [ 19 ], Ambient Assisted Living (AAL) applications require room-level coverage with accuracy below 1 m, whereas law-enforcement applications require urban/rural coverage with accuracy of a few meters. Therefore, the particular characteristics of indoor scenarios and the diversity of applications have meant that no single Indoor Positioning System (IPS) has emerged as a universal solution.

The available indoor solutions are highly coupled to the environment and target application. We can find, for instance, smart home systems designed to help us locate misplaced objects using 802.15.4a [ 20 ]; systems to monitor the daily activities of seniors at home using smartwatches and IEEE 802.11 Wireless LAN (Wi-Fi) fingerprinting [ 21 ]; or remote patient monitoring with ZigBee [ 22 ]. Not only are coverage and accuracy important when selecting the base positioning technology; deployment and maintenance costs are also relevant. The diversity of solutions is a clear indicator that there is no single indoor alternative to GNSS, and different positioning technologies co-exist.

In addition to the specific requirements of the scenarios and applications, IPSs must also cover different requirements depending on the kind of actors in the system, which include aerial robots [ 23 , 24 ], mobile terrestrial robots [ 25 , 26 , 27 ], and humans [ 28 , 29 ]. These actors present diverse needs: robots require accurate positioning to achieve safe autonomous operation, whereas IPSs focused on human tasks are not required to perform control actions. Furthermore, in contrast to IPSs for robots, those for humans are usually restricted to devices already in use by the user (e.g., smartphone, smart watch), which inherently imposes battery and computational power constraints. In this article we focus on Collaborative Indoor Positioning Systems (CIPSs) for humans.

Advanced solutions based on the combination of multiple positioning technologies have also been widely used [ 30 , 31 , 32 , 33 ]. For instance, ref. [ 34 ] introduces an application that combines Bluetooth Low Energy (BLE) and Pedestrian Dead Reckoning (PDR) with a particle filter for the purpose of guiding people with visual impairments. As stated in [ 32 ], sensor fusion efficiently combines data from disparate sources (e.g., sensors) to generate better information than that reported by the original sources individually. Combining multiple sensors for positioning can minimize their individual constraints: the low update frequency of Wi-Fi, unpredictable external disturbances affecting the magnetometer [ 35 ], fluctuations of the barometer [ 36 ], drift of the gyroscope [ 37 ], or the random noise and bias present in Micro-Electro-Mechanical Systems (MEMS) and Inertial Measurement Units (IMUs) [ 38 ].
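To make the BLE/PDR fusion idea above more concrete, the following minimal sketch shows how a particle filter can propagate particles with noisy PDR steps and then re-weight them with a range likelihood derived from a single BLE beacon. It is not the method of [ 34 ]; the beacon position, noise parameters, and measurement values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: one BLE beacon at a known position and a pedestrian
# whose step length and heading are provided by a PDR front-end.
BEACON_XY = np.array([5.0, 3.0])   # assumed beacon coordinates (m)
RANGE_STD = 1.0                    # assumed std of BLE-derived ranges (m)
STEP_STD, HEAD_STD = 0.1, 0.05     # assumed PDR noise (m, rad)

N = 1000
particles = rng.normal([0.0, 0.0], 1.0, size=(N, 2))  # initial position guess
weights = np.full(N, 1.0 / N)

def pdr_predict(particles, step_len, heading):
    """Propagate each particle with a noisy PDR step."""
    steps = step_len + rng.normal(0.0, STEP_STD, len(particles))
    heads = heading + rng.normal(0.0, HEAD_STD, len(particles))
    particles[:, 0] += steps * np.cos(heads)
    particles[:, 1] += steps * np.sin(heads)
    return particles

def ble_update(particles, weights, measured_range):
    """Re-weight particles by the likelihood of a BLE range measurement."""
    dists = np.linalg.norm(particles - BEACON_XY, axis=1)
    lik = np.exp(-0.5 * ((measured_range - dists) / RANGE_STD) ** 2)
    weights = weights * lik + 1e-300          # avoid an all-zero weight vector
    return weights / weights.sum()

def resample(particles, weights):
    """Resample particles to avoid weight degeneracy."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# One illustrative cycle: a 0.7 m step heading east, then a 5.2 m BLE range.
particles = pdr_predict(particles, step_len=0.7, heading=0.0)
weights = ble_update(particles, weights, measured_range=5.2)
particles, weights = resample(particles, weights)
print("position estimate:", particles.mean(axis=0))
```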

Within IPSs, collaborative positioning has become relevant in the last years. CIPSs might be considered the evolution of sensor fusion, as they also combine data from multiple sources. As a differentiating factor, CIPSs rely on various independent actors who share sensing information, conveying key positioning data from heterogeneous sensors, to enable the positioning of every actor and improve it along different dimensions [ 39 ].

CIPSs present several advantages over conventional IPS approaches. They expand the coverage area of stand-alone IPSs by sharing the position of users [ 40 , 41 , 42 ]. They reduce the use of expensive and/or complex positioning infrastructure while enhancing the position accuracy of users [ 43 , 44 , 45 ]. They reduce positioning ambiguities due to poor geometric location of anchors [ 46 , 47 ]. They also reduce positioning error in harsh and Non-line-of-sight (NLOS) environments by using the surrounding users as auxiliary anchor nodes [ 48 , 49 , 50 ]. CIPSs have applied different technologies, techniques, and methods to address positioning and achieve the aforementioned advantages, yet a comprehensive overview of this emerging, diverse field is missing.

This paper introduces a systematic review on CIPSs. The review is based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [ 51 ]. We identify, analyze, classify, and discuss the main findings on CIPSs reported in the scientific literature indexed in Scopus or Web of Science datasets. Despite the publication of several surveys and reviews related to IPSs [ 52 , 53 , 54 , 55 , 56 , 57 ], none of them focuses on collaborative approaches. This article therefore focuses on the following:

  • Systematically collecting and analyzing research works related to CIPSs;
  • Identifying and classifying the technologies, techniques, and methods applied;
  • Identifying and classifying the computation architectures and infrastructures required for positioning;
  • Identifying and describing the types of evaluation performed;
  • Analyzing and discussing the results, in order to provide an overview of CIPS, and to uncover trends, challenges, and gaps in this research field.

The remainder of this work is structured as follows. Section 2 presents the background in terms of IPS and related technologies, techniques, and methods. Section 3 describes the research methodology applied to conduct this systematic review. Section 4 graphically reports and analyzes the results. Section 5 further discusses the implications of the results in different dimensions. Additionally, the main limitations, current trends, and gaps are discussed. Finally, Section 6 summarizes the main findings and points out future work.

2. Background

In this section, we discuss relevant terminology, as well as existing classification schemes for IPSs and their applied technologies, techniques, and methods. In addition, we present an overview of CIPSs, highlighting their advantages with respect to traditional IPSs.

2.1. Indoor Positioning Systems

As its name indicates, an IPS is used to provide a position estimate in indoor environments. However, the design of an IPS highly depends on the context, and it is built on top of three main components. First, the base indoor positioning technology is the core of the IPS and is an indicator of the deployment's context, i.e., the expected accuracy as well as any additional requirements and restrictions. In contrast to outdoor positioning, where synchronized, timestamped radio signals are transmitted from a constellation of satellites along a line of sight to the receiver, indoor positioning technologies are of diverse nature and include well-known optical (e.g., Visible Light Communication (VLC)), radio frequency (e.g., Frequency Modulation (FM), Wi-Fi, BLE, among others), acoustic, and inertial measurement technologies. Second, the indoor positioning technique indicates what data/measurements or information are processed to calculate the position: for instance, the direction and angle from which a signal is received (Angle of Arrival (AoA)), the elapsed time of a signal from a transmitter to the receiver (Time of Arrival (ToA) and variants), the properties of the channel in a communication link (Channel State Information (CSI)), the strength of the signal at the receiver side (Received Signal Strength Indicator (RSSI)), or even the set of RSSIs from multiple emitters treated as a block (a fingerprint). Third, and finally, the indoor positioning method is the particular algorithm used to process the data/measurements or information collected for positioning. In the literature, a wide range of methods is described, from very particular variants of well-known algorithms (e.g., k-Nearest Neighbors (k-NN)) to only vaguely outlined methods that are referred to by the technique they use (e.g., fingerprint-based method). In addition, indoor positioning methods can be specific to a particular technology and technique (e.g., PDR for inertial measurements), or they can be universal algorithms (e.g., Machine Learning algorithms such as k-NN or Support Vector Machines). To sum up, even though an IPS can be relatively simple, such as applying the k-NN algorithm over fingerprints of Wi-Fi signals [ 21 , 58 , 59 ], most of the advanced systems are complex, e.g., applying Extended Kalman Filters (EKFs) or particle filters to combine PDR over IMU data and fingerprinting based on BLE [ 34 , 60 ].
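As a concrete illustration of the simple case mentioned above (the k-NN algorithm applied over Wi-Fi fingerprints), the following sketch estimates a position as the centroid of the k radio-map entries whose RSSI vectors are closest to an observed fingerprint. The radio map, the AP ordering, and all numeric values are illustrative assumptions rather than data from any cited system.

```python
import numpy as np

# Illustrative radio map: each row is (x, y, RSSI to AP1..AP3 in dBm).
# A value of -100 dBm would stand in for an AP that was not heard.
RADIO_MAP = np.array([
    [0.0, 0.0, -45.0, -70.0, -80.0],
    [5.0, 0.0, -60.0, -50.0, -75.0],
    [5.0, 5.0, -75.0, -55.0, -50.0],
    [0.0, 5.0, -65.0, -72.0, -58.0],
])

def knn_position(fingerprint, radio_map=RADIO_MAP, k=3):
    """Estimate (x, y) as the centroid of the k nearest reference fingerprints."""
    coords, rssi = radio_map[:, :2], radio_map[:, 2:]
    dists = np.linalg.norm(rssi - fingerprint, axis=1)   # Euclidean distance in signal space
    nearest = np.argsort(dists)[:k]
    return coords[nearest].mean(axis=0)

# Observed fingerprint (RSSI to AP1..AP3); the values are made up for the example.
print(knn_position(np.array([-62.0, -53.0, -68.0])))
```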

Based on the computational architecture, the IPS can be classified into two categories: server-based—as in Where@UM [ 61 ]—and server-less/stand-alone—as in AnyPlace [ 62 ]—which indicate where the position estimate is computed. In a server-based architecture, the server processes the raw data provided by each device, without using information of the other devices, i.e., all the localization estimations are carried out in a remote server. In the server-less architecture, each device acquires the raw relevant data from sensors and processes them to self-determine the position, i.e., all the localization estimations are carried out locally on the device. In both cases, the position of a device is estimated using the data and information provided by that device.

Regarding the infrastructure, the literature generally distinguishes between infrastructure-less and infrastructure-based IPSs [ 63 , 64 , 65 , 66 ]. Infrastructure-less systems do not require any infrastructure to be deployed in the area in order to operate, e.g., IPSs based on the magnetic field [ 66 ]. In contrast, infrastructure-based IPSs require an infrastructure to operate, i.e., one or more physical elements deployed in the environment [ 63 , 65 , 66 ] (e.g., BLE beacons or ultrasound receivers). To differentiate the systems where infrastructure needs to be purposely deployed from systems that use existing infrastructure (i.e., signals of opportunity), some authors identified an in-between class: opportunistic IPSs [ 67 , 68 , 69 ]. For example, IPSs based on Wi-Fi are considered opportunistic if the environment is not altered to allow their operation (i.e., no Wi-Fi Access Points (APs) are purposely deployed for the IPS). In this paper, we do not consider opportunistic approaches as a separate class. For detailed information on (non-collaborative) IPSs, we refer to the excellent recent reviews and surveys available in the literature, such as [ 53 , 70 , 71 ].

2.2. Indoor Positioning Technologies

From a technological point of view, researchers have proposed a wide variety of solutions for indoor positioning in search of improved performance in various application scenarios. In the literature, technologies for indoor positioning have been widely described, classified, used, and evaluated [ 14 , 19 , 52 , 53 , 54 , 55 ]. Nevertheless, a unified classification of the technologies is still missing. For instance, the authors of [ 55 ] categorize technologies into six groups based on the kind of signal used to measure the position, hereby only covering Wireless Personal Networks. The authors of [ 19 ] classify the technologies into thirteen sensor technologies, based on the underlying idea that the performance of systems with the same type of sensors can be easily compared; similarly, the authors of [ 53 ], in their meta-review, identify and describe ten categories that cover the most common technologies used in IPSs, based on the type of sensors used. The authors of [ 14 ] summarize the specifications and features of twenty positioning technologies encountered in their survey, and they provide a categorization of the most suitable positioning technologies already available for LBS applications. Table 1 summarizes the aforementioned classifications.

Table 1. Different indoor positioning technology classifications reported (compared across Gu et al. (2009), Mautz (2012), Basiri et al. (2017), and Mendoza-Silva et al. (2019)).

2.3. Indoor Positioning Techniques

The techniques applied in an IPS depend primarily on the technology used. Furthermore, the performance of IPSs can vary dramatically depending on the type of technique applied, even when the technology and test conditions are identical. Therefore, in the literature, we find a significant number of works classifying and summarizing IPS techniques and their features [ 14 , 52 , 53 , 54 , 55 ]. Similarly to technologies, techniques have been categorized from different points of view. Liu et al. [ 52 ] categorize techniques into three groups: Triangulation, which is based on geometric properties to estimate the target position and is divided into two subgroups (Lateration and Angulation); Scene Analysis, based on fingerprint measurements; and Proximity, based on relative location information. Gu et al. [ 55 ] classify the techniques into four categories, adding a new category (Vision Analysis) based on the images received by one or multiple points. Zafari et al. [ 54 ] do not create subgroups in their classification; they present six techniques based on range measurements, one based on fingerprints, and a new one (Channel State Information) based on the channel properties. Mendoza-Silva et al. [ 53 ] present a classification of four techniques based on the three main range measurements and AoA. Table 2 briefly presents a summary of some of the reported techniques.
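As a worked illustration of the range-based techniques named above, two of the most common range models can be written as follows; the numeric values are illustrative assumptions, and real deployments must calibrate the path-loss parameters:

$$P_r(d) = P_0 - 10\,n\,\log_{10}\!\left(\frac{d}{d_0}\right) \;\;\Rightarrow\;\; d = d_0\,10^{(P_0 - P_r)/(10\,n)}, \qquad d_{\mathrm{ToA}} = c\,\tau .$$

For example, with a reference power $P_0 = -40$ dBm at $d_0 = 1$ m and a path-loss exponent $n = 2$, a measured RSSI of $P_r = -60$ dBm corresponds to $d = 10^{20/20} = 10$ m; with ToA, a time of flight of $\tau \approx 33.3$ ns yields roughly the same 10 m.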

Table 2. Different indoor positioning technique classifications reported (compared across Liu et al. (2007), Gu et al. (2009), Zafari et al. (2019), and Mendoza-Silva et al. (2019)).

2.4. Indoor Positioning Methods

The methods (also termed algorithms) for indoor positioning are defined as detailed sequences of steps to follow in order to compute the position of a target object [ 55 ] and are intrinsically linked to the type of technologies and techniques used. Several research works that summarize them have been published, either in general or specifically for a certain technology and/or technique. For example, for the latter, He and Chan [ 56 ] summarize and classify the methods used in Wi-Fi fingerprinting-based IPSs as probabilistic or deterministic; Chen et al. [ 66 ] classify the localization methods based on received Wi-Fi signal strength into geometric-based and fingerprinting-based schemes; Güvenc and Chong [ 57 ] provide an overview of ToA-based localization methods and classify them into methods for Line-of-sight (LOS) and Non-line-of-sight (NLOS) scenarios; in contrast, Yassin et al. [ 72 ] conduct a general overview of methods covering the basic non-collaborative positioning techniques (Triangulation, Scene Analysis, and Proximity).

It is relevant to highlight that all these classifications are based on the operational phase of the IPS, not on the data collection phase (i.e., the origin of the reference data is not a factor). For instance, RADAR [ 58 ], the first fingerprint method, can still be applied to novel systems, whether the reference data are collected by means of crowdsourcing [ 73 ], obtained after interpolating a reduced radio map [ 74 ], automatically generated from unlabeled samples [ 75 ], or artificially generated by means of an advanced path-loss model [ 76 ]. Table 3 briefly presents a summary of some of the reported methods.

Table 3. Different indoor positioning method classifications reported (compared across Güvenc and Chong (2009), who distinguish LOS and NLOS scenarios, He and Chan (2016), Yassin et al. (2017), and Chen et al. (2017)).

2.5. Collaborative Indoor Positioning Systems

Considering the role of different actors in IPSs, the literature distinguishes two main types: non-collaborative and collaborative [ 48 , 50 , 77 , 78 ]. This terminology refers to the operational phase (i.e., estimating the position), not the (reference) data gathering phase (e.g., building a fingerprint radio map). As such, non-collaborative schemes refer to systems that do not consider the participation of other users in their positioning algorithm [ 78 ]. In contrast, a CIPS is a scheme in which the position is determined based on the direct or indirect interaction between neighboring devices or diverse IPSs. Note that collaborative approaches should not be confused with data or sensor fusion approaches. Whereas collaborative positioning focuses on systems whose independent actors (users or devices) exchange information and compute relative distances between them to provide the position of the set of users [ 77 , 78 , 79 , 80 , 81 ], sensor fusion combines information from various sensors of a single actor to provide the position of a single user [ 30 , 31 , 32 , 33 ].

Technologies, techniques, and methods developed for traditional IPSs are largely reused by collaborative systems to determine the position of collaborative nodes in a CIPS. However, CIPSs take advantage of those technologies that allow not only estimating the position but also exchanging information between nodes. Within those technologies, we can distinguish wireless technologies (e.g., Wi-Fi, BLE, Ultra-wide band (UWB)) and cellular networks, which can be used with different well-known communication protocols, such as iBeacon and Bluetooth, among others. The methods of CIPSs are very diverse; however, some of the most studied methods are based on belief propagation and on non-Bayesian approaches such as Least Squares (LS) and maximum likelihood [ 39 ].
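To illustrate the non-Bayesian end of this spectrum, the sketch below computes a linearized least-squares position fix from ranges to positions that are assumed known (e.g., neighboring users acting as auxiliary anchors). The anchor coordinates and ranges are illustrative values only, not data from any of the surveyed systems.

```python
import numpy as np

def multilateration_ls(anchors, ranges):
    """Linearized least-squares position fix from ranges to known positions.

    anchors: (m, 2) array of anchor (or neighboring-user) coordinates, m >= 3
    ranges:  (m,)   array of measured distances to those anchors
    """
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    ref_xy, ref_r = anchors[-1], ranges[-1]     # use the last anchor as reference
    # Subtracting the reference range equation from the others gives a linear
    # system A [x, y]^T = b in the unknown position.
    A = 2.0 * (ref_xy - anchors[:-1])
    b = (ranges[:-1] ** 2 - ref_r ** 2
         - np.sum(anchors[:-1] ** 2, axis=1)
         + np.sum(ref_xy ** 2))
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xy

# Illustrative example: three neighbors with known positions and noiseless
# ranges measured from a user actually located at (3, 4).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [5.0, np.sqrt(65.0), np.sqrt(45.0)]
print(multilateration_ls(anchors, ranges))   # approximately [3. 4.]
```

In a collaborative setting, the same solver could be fed with D2D ranging results (e.g., UWB two-way ranging to nearby users) instead of ranges to fixed anchors.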

In contrast to non-collaborative IPSs, the computational architecture is more complex, as the position estimate depends not only on the device's own data but also on data gathered by nearby devices. Usually, the computational architecture of a CIPS can be classified into two categories: centralized and decentralized. In a centralized architecture [ 72 , 78 , 82 ], the nodes/actors collect the unprocessed data from the sensors, which are sent to a central node that calculates the position estimate of all nodes. In a decentralized architecture [ 19 , 65 , 72 , 77 , 78 , 82 ], the role of the nodes consists not only of acquiring and sharing (raw or processed) relevant data, but also of processing them in order to, for instance, self-determine their position. In both cases, the final position of a device is collaboratively estimated using the data and information provided by that device and by other devices.

Figure 1 shows an illustrative example of a CIPS. First, notice the heterogeneity of this indoor positioning scenario, exhibited by the five users, who use different approaches to self-estimate their position. For the non-collaborative part: User 1 uses the BLE technology, the RSSI technique, and the weighted centroid method; User 2 uses the Magnetic Field-based technology, the Magnetic Field Map technique, and a likelihood method; User 3 uses the UWB technology, the ToA technique, and the Multilateration method; and Users 4 and 5 use the Wi-Fi technology, the fingerprinting technique, and the k-NN method. The blue ellipses under the users represent the estimated position and its uncertainty. For the collaborative part, all users use Device to Device (D2D) communications based on 5G technology. We present two cases where collaboration improved the results (see the red ellipses).

Figure 1. Representative example of a heterogeneous collaborative indoor positioning system. Source: Authors.

  • Case 1 aims to enhance the position accuracy of User 5, whose estimate has large uncertainty. The CIPS applies an EKF to integrate the ranging information from Users 2 and 3 to estimate a better position (a minimal EKF range-update sketch is given after this list).
  • Case 2 aims to determine the position of User 4, who is not able to self-determine their position because they are far from the Wi-Fi area. The CIPS applies an EKF to integrate the ranging information from Users 1–3 to estimate the position even if the non-collaborative part fails.
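The following minimal sketch shows the kind of EKF measurement update used in cases like these: a user's position estimate and covariance are corrected with a single range measurement to a neighbor whose position is assumed known. All numeric values, variable names, and noise settings are illustrative assumptions, not parameters of any surveyed system.

```python
import numpy as np

def ekf_range_update(x, P, neighbor_xy, measured_range, range_var):
    """One EKF measurement update using a range to a neighboring user.

    x: (2,) prior position estimate        P: (2, 2) prior covariance
    neighbor_xy: (2,) neighbor's position  measured_range: ranging result (m)
    range_var: variance of the ranging measurement (m^2)
    """
    diff = x - neighbor_xy
    predicted = np.linalg.norm(diff)              # predicted range (assumed non-zero)
    H = (diff / predicted).reshape(1, 2)          # Jacobian of the range w.r.t. x
    S = H @ P @ H.T + range_var                   # innovation covariance (1x1)
    K = P @ H.T / S                               # Kalman gain (2x1)
    x_new = x + (K * (measured_range - predicted)).ravel()
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new

# Illustrative values: an uncertain Wi-Fi-based estimate corrected by a ranging
# measurement to a neighbor whose position is assumed known.
x, P = np.array([2.0, 1.0]), np.diag([4.0, 4.0])
x, P = ekf_range_update(x, P, neighbor_xy=np.array([6.0, 1.0]),
                        measured_range=3.0, range_var=0.1)
print(x, np.diag(P))
```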

The main difference between CIPSs and traditional IPSs is that the CIPSs exploit the technologies of the systems both for communication between users and for distance estimation, and the methods consider not only individual information but that of the entire group of collaborators to estimate the position.

3. Research Methodology

This work introduces a systematic review on Collaborative Indoor Positioning Systems (CIPSs) based on the PRISMA guidelines [ 51 ]. Summarized, the review protocol is as follows. First, a set of research questions is formulated to establish the scope of the review. Then, a set of inclusion and exclusion criteria are defined, related to the stated research objectives and boundaries drawn from the research questions, in order to decide the relevance of every considered research article. Next, a rigorous study selection process is carried out, by first defining relevant search queries and running them against scientific digital libraries (Scopus and Web of Science in this work) to identify all potentially relevant studies. Subsequently, the found records are merged, duplicates removed, and screened against the inclusion and exclusion criteria in order to obtain the final set of relevant articles. These articles are then classified, and their features are extracted, mapped, and analyzed.

The research questions and inclusion and exclusion criteria used in this systematic methodology are described in Section 3.1 and Section 3.2 , respectively. The study selection process is fully explained in Section 3.3 , and the classification of studies is presented in Section 3.4 .

3.1. Research Questions

The purpose of this systematic review is to assess and present an overview of research works in CIPSs, as well as present their results. In accordance with those goals, the following set of research questions was formulated:

  • RQ1: What are the infrastructures, architectures, technologies, techniques, and methods (also called algorithms) used in/for CIPSs?
  • RQ2: In which combination are technologies, techniques, and methods used in/for CIPSs?
  • RQ3: How have CIPSs been evaluated, and what are the metrics used?
  • RQ4: What are the limitations, current trends and gaps, and future research avenues that have been reported?

RQ1 specifies the overall goal of our review. Although infrastructures, architectures, technologies, techniques, and methods have been addressed in literature in the context of IPS, we focus here on their use in collaborative systems, and we draw parallels and differences with respect to non-collaborative systems. RQ2 specifically aims to gain an insight in the use of the different technologies, techniques, and methods in conjunction, as these form the core of CIPS. The objective of RQ3 is to present the evaluation metrics used, the type of evaluations performed in CIPS, and their distribution based on the data reported by authors. The goal of the RQ4 is to provide an overview of trends, gaps and limitations, and to provide the research community with avenues for future research in CIPS. Research questions are addressed in the sections indicated in Table 4 .

Table 4. Research questions and sections.

  • Research Question 1: Results and Discussion
  • Research Question 2: Results and Discussion
  • Research Question 3: Results and Discussion
  • Research Question 4: Discussion and Conclusion

3.2. Inclusion and Exclusion Criteria

The studies considered in this review are assessed based on the following inclusion and exclusion criteria.

3.2.1. Inclusion Criteria

  • IC1: Any full, primary research article written in English and published in a peer-reviewed international journal or conference proceedings.
  • IC2: Any article that explicitly presents a Collaborative Indoor Positioning System for human use.

3.2.2. Exclusion Criteria

  • EC1: Any articles that are not full papers (e.g., short papers, demo papers, extended abstracts), or are not primary research (e.g., reviews, surveys), or are not published in a peer-reviewed international conference or journal (e.g., white books, blog posts, workshop papers).
  • EC2: Any articles that do not propose or analyze as main topic at least one CIPS for providing a user’s indoor position (e.g., non-collaborative systems, outdoor systems, algorithms outside the context of a CIPS) or target non-human use (e.g., aerial drones, underwater robotic systems).
  • EC3: Any articles that do not consider the definition of collaboration as the action of joint working between neighboring actors to provide positioning (e.g., sensor fusion, data fusion algorithm, stand-alone device with multi-sensors cooperation).

3.3. Study Selection Process

To select relevant articles, the PRISMA process for study selection was rigorously followed. First, to identify potentially relevant studies with respect to the research questions, an extensive article search was performed using the search engines from two curated scientific digital libraries, namely Scopus and Web of Science, during the identification phase. For each, an equivalent search query was specified, combining multiples keywords using boolean operators, in accordance with required syntax (see Appendix A.1 ). Subsequently, a screening process was carried out, by first removing duplicates, and subsequently screening the title, abstract, and keywords of the remaining articles against the inclusion/exclusion criteria. Finally, in the eligibility phase, the full remaining articles were checked against the eligibility criteria, to obtain a final set of included articles. The study selection process, with step-wise results, is schematically depicted using the PRISMA flow diagram in Figure 2 . As a final set of eligible studies, 84 articles [ 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 63 , 77 , 79 , 80 , 81 , 83 , 84 , 85 , 86 , 87 , 88 , 89 , 90 , 91 , 92 , 93 , 94 , 95 , 96 , 97 , 98 , 99 , 100 , 101 , 102 , 103 , 104 , 105 , 106 , 107 , 108 , 109 , 110 , 111 , 112 , 113 , 114 , 115 , 116 , 117 , 118 , 119 , 120 , 121 , 122 , 123 , 124 , 125 , 126 , 127 , 128 , 129 , 130 , 131 , 132 , 133 , 134 , 135 , 136 , 137 , 138 , 139 , 140 , 141 , 142 , 143 , 144 , 145 , 146 , 147 , 148 , 149 ] were included in our review for full analysis.

Figure 2. PRISMA flow diagram.

3.4. Classification of the Studies

Our classification scheme is driven by the typical logical breakdown of Collaborative Indoor Positioning Systems: (i) a non-collaborative phase, in which relevant data are acquired and, optionally, positioning is determined by every individual node; (ii) a collaborative phase, in which relevant data are exchanged between nodes, and positioning is determined based on exchanged data; (iii) an overall system, which coordinates and integrates the non-collaborative and collaborative parts of the system. In accordance, and considering the goals and research questions of our review, our classification scheme can be found in Figure 3 and is further elaborated in the next subsections.

Figure 3. Structure of classification of studies.

3.4.1. Non-Collaborative and Collaborative Phases

The non-collaborative and collaborative phases consider the technologies, techniques, and methods involved in the CIPS.

  • Technologies . This category covers the technologies used to calculate the position on one hand (non-collaborative part) and to provide collaboration between users or nodes on the other hand. In one CIPS, the same or different technologies may be used for either part. Examples of technologies include IMU, Radio-Frequency Identification (RFID), and VLC for the non-collaborative part, and Bluetooth, Wi-Fi, and UWB for both parts.
  • Techniques . Includes the techniques used for positioning and collaboration between users or nodes. Examples of techniques include fingerprinting, Dead Reckoning (DR), and Time of Arrival/Flight (ToA/ToF) for the non-collaborative part, and position sharing, Two-way Ranging (TWR), and Time Difference of Arrival (TDoA) for the collaborative part. We define techniques as the way certain technologies and derived data are organized and used to achieve positioning.
  • Methods . Includes the algorithms and mathematical methods to compute the positioning and integrate collaboration among users. Examples of methods include Received Signal Strength (RSS)-based methods, PDR, and k-NN for the non-collaborative part, and Particle Filter, Belief Propagation, and EKF for the collaborative part. We define methods as a set of logical rules or processes to be followed in calculations in order to determine a positioning estimate.

3.4.2. Overall System

The overall system considers the general features of the system and permits to classify the systems into four dimensions listed below.

  • System Architecture . The System Architecture refers to the type of data processing architecture used in the CIPS, distributed or centralized in this review.
  • System Infrastructure . The hardware that the CIPS requires to be deployed in the environment in order to operate, such as BLE beacons, RFID tags, fixed cameras, and other ad hoc elements.
  • System Evaluation . This category refers to how the system’s accuracy and performance are evaluated, for example, using numerical simulation, field tests, or both.
  • Main Finding(s) Reported . In this category, the main findings reported by the authors of the CIPS are classified. They are related to the evaluation metrics (position accuracy, position precision, system robustness, computational complexity, energy consumption), which are strongly linked with overarching concerns (i.e., concerns not specific to a particular architecture, infrastructure, technology, technique, or method, but instead relevant for all systems), limitations of the systems, and future research avenues.

The general organization of our classification scheme largely corresponds with those found in literature (discussed in Section 2.2 , Section 2.3 and Section 2.4 ), yet differs in the fact that we do not attempt to group techniques, technologies, or methods. Instead, we exhaustively list all techniques, technologies, and methods encountered in our review in order to categorize papers according to their use of them.

4. Results

In this section, we present the results of the data analysis performed on the set of 84 articles identified during the article search and selection process, hereby focusing on research questions RQ1–RQ3. In particular, the results include the distribution of the articles over time, the reported types of evaluation and evaluation metrics, the architectures and infrastructures, and the technologies, techniques, and methods, as well as the combinations among the latter three. Additionally, a table that fully discloses all classification data for all papers in this review is available in Appendix A.2 .

4.1. Evolution of CIPS over Time and Their Evaluation Metrics

The stacked bar graph in Figure 4 a shows the distribution of the 84 articles published regarding CIPSs, together with the reported evaluation metrics. A vertically split bar represents multiple evaluation metrics in a single article. For example, the bar of 2016 shows that all six articles of that year evaluated position accuracy, yet one article combined this with an evaluation of position precision, and two articles combined it with a robustness evaluation. Accumulated results are presented in the embedded pie chart.

Figure 4. Evolution of the systems over time. (a) Distribution of the studies with their evaluation metrics. (b) Evolution of the systems' architecture. (c) Evolution of the systems' infrastructure. (d) Evolution of the systems' evaluation. Source: Authors.

Overall, the number of publications experienced a positive trend throughout the considered time period 2006–2020, as confirmed by the linear trend-line (light gray) calculated using the least-squares method. The first study dates from 2006, and the first five years show a low number of publications. From 2010, we see a slow yet steady increase in the number of articles, with growth peaks in 2011, 2015, and 2019 that help to reach an average of approximately seven papers per year over this period. Additionally, it is clear that the most popular evaluation metric was position accuracy (represented in cyan), which was present in all articles, mostly as the only metric (69% accumulated) or in combination with others (31% accumulated). Of the other metrics, a computational complexity evaluation was most often performed (15.5% + 3.6% in combination with robustness), followed by robustness (6% + 3.6%). Energy and position precision were least represented with 3.6% and 2.4%, respectively.

The last four years account for 46% (39 articles) of all the research. Overall, no particular temporal trends in evaluation metrics can be discerned, although in the last four years we notice some increasing interest in computational complexity, sometimes combined with robustness as evaluation metrics, but it is too soon (and the numbers are too low) to speak of a trend.

We note that the numbers reported for 2020 should be considered and analyzed with caution: (i) at the time of update (8 January 2021), the research databases may not yet include all the papers published in 2020; (ii) the COVID-19 health situation in 2020 was uncommon, with severe lock-downs worldwide. These restrictions impacted research, as many researchers were unable to attend their workplace for extended time periods and to perform empirical on-site experimentation; (iii) several relevant international venues for positioning and LBS were either canceled or postponed to 2021; (iv) some teams temporarily prioritized research on topics related to the health situation, such as contact tracing, over their usual positioning and LBS work. All these issues reduced the expected outputs (in terms of experiments and publications) for the year 2020; therefore, we cannot consider 2020 as representative in the overall evolution of works.

4.2. Infrastructure and Architecture

Figure 4 b,c presents the results of the analysis towards the use of architectures and infrastructures in the 84 articles analyzed in this systematic review.

As can be observed from Figure 4 b, two main architectures were encountered: decentralized, which is the most prevalent and accounts for 44.05% of articles, and centralized, which accounts for 26.19%. Remarkably, for 23 analyzed papers (27.38%), the architecture was not reported. Only two articles described a combination of the two above-mentioned architectures: one system reported a hybrid architecture that combined the centralized and decentralized approaches [ 100 ] (red in Figure 4 b), and one system proposed an interchangeable architecture that could either operate as a centralized or decentralized system [ 96 ] (green in Figure 4 b).

Figure 4 b shows that from 2006 to 2014 the decentralized architecture was, in general, the most prevalent, with a ratio of 4:1. Subsequently, in the years 2015 to 2017, there was a surge of articles describing the use of a centralized architecture: the ratio between centralized and decentralized was 2.2:1. In the years 2018 and 2019, the decentralized architecture presented a sharp increase in the number of articles, whereas the centralized decreased again: the ratio between centralized and decentralized swapped to 1:2.2. The CIPSs combining both were proposed in 2012 and 2013.

From Figure 4 c it is clear that the majority of CIPSs (63.1%) were infrastructure-less (including 14 out of 53 systems based on Signals of Opportunity), versus 28.57% that need infrastructure. A few works, 8.33%, did not report whether or not infrastructure was used. Systems with infrastructure only slowly appeared from 2010 onwards and generally increased in the last years. Nevertheless, for every year, the number of articles published regarding infrastructure-less systems outnumbered the number of articles regarding systems with infrastructure, with the exception of 2016 (equal amount) and 2019, in which we see infrastructure-based systems for the first time overtaking infrastructure-less systems (8 versus 6). It remains to be seen if this trend continues in the next years.

4.3. Non-Collaborative Technologies, Techniques, and Methods

The Sankey diagram in Figure 5 shows the technologies (left), techniques (middle), and methods (right) for non-collaborative positioning estimation based on individually (non-collaboratively) acquired information. Articles were classified and grouped along these dimensions, and the groups were sorted in descending order. Each percentage (between brackets after the technology/technique/method name in Figure 5 ) denotes the number of articles (over the total set) in which a technology, technique, or method was used. Note that the sum of percentages within each dimension may exceed 100%, as the CIPS proposed in an article may use several technologies, techniques, or methods. A horizontal line denotes the combination of a technology with a technique (left in Figure 5 ) and the combination of a technique with a method (right in Figure 5 ) in a CIPS. The color of the lines is determined by the technique, as this best determines the technology and method used. It needs to be noted that some articles did not report the exact method used. Rather than classifying them as “unknown”, we classified these methods more informatively according to the technique they were used in combination with, using the suffix “-based”, i.e., PDR-based, RSSI-based, and Fingerprinting-based.
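To make the counting behind these percentages concrete, the following minimal Python sketch tallies technology usage and technology–technique links from a handful of invented classification records; the article identifiers and combinations are hypothetical, not the actual data of the 84 reviewed papers. Because one article can contribute several combinations, the per-dimension percentages can exceed 100%, exactly as described above.

```python
from collections import Counter

# Hypothetical classification records; each article may list several
# (technology, technique) combinations for its non-collaborative part.
articles = {
    "A1": [("Wi-Fi", "RSSI"), ("IMU", "DR")],
    "A2": [("Wi-Fi", "Fingerprinting")],
    "A3": [("UWB", "ToA/ToF")],
}

tech_counts = Counter()
pair_counts = Counter()
for combos in articles.values():
    # Count each technology once per article, even if it appears several times.
    for tech in {t for t, _ in combos}:
        tech_counts[tech] += 1
    # Count every technology -> technique link (these become the Sankey flows).
    for pair in combos:
        pair_counts[pair] += 1

total = len(articles)
for tech, n in tech_counts.most_common():
    print(f"{tech}: {100 * n / total:.1f}% of articles")
for (tech, technique), n in pair_counts.most_common():
    print(f"{tech} -> {technique}: {n} link(s)")
```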

Figure 5. Non-collaborative technologies, techniques, and methods in CIPS. Source: Authors.

Figure 5 gives an overview of the frequency of use of each individual technology, technique, and method, as well as of their combinations. We observe that twelve different technologies have been used, of which Wi-Fi was predominant (53.5%). Inertial Measurement Unit (IMU) (30.9%) and Ultra-wide band (UWB) (15.4%) were also relatively well-studied, while the rest of the technologies received little attention: Bluetooth (2.3%), 5G (2.3%), IEEE.802.15.4a.CSS (2.3%), Long-Term Evolution (LTE) (2.3%), Radio-Frequency Identification (RFID) (2.3%), Visible Light Communication (VLC) (2.3%), Camera (1.1%), Hybrid Sensors (1.1%), and Laser + Compass (1.1%). Nine out of the twelve technologies appeared in just one or two papers. The Wi-Fi technology clusters the technologies Wi-Fi Direct, Wi-Fi (WLAN), Wireless Application Service Provider (WASP), and Wireless Sensor Network (WSN) based on Wi-Fi.

Moreover, we found ten different techniques. The four most representative techniques were Received Signal Strength Indicator (RSSI) (36.9%), Dead Reckoning (DR) (30.9%), Fingerprinting (23.8%), and Time of Arrival/Flight (ToA/ToF) (11.9%). Less represented techniques were Time Difference of Arrival (TDoA) (4.7%), Two-way Ranging (TWR) (3.5%), Angle of Arrival (AoA) (2.3%), Hybrid Techniques (1.1%), QR Code (1.1%), and Uplink Time-Difference-of-Arrival (UTDoA) (1.1%). Four out of the ten techniques appeared in just one or two papers.

Finally, sixteen different methods were encountered, with two clearly more studied than others: Pedestrian Dead Reckoning (PDR)-based methods (29.7%) and cooperative methods (23.8%), i.e., methods that are used jointly in both the non-collaborative and collaborative phases. A second group of four methods was still reasonably well-studied: ranging (14.2%), Received Signal Strength Indicator (RSSI)-based methods (11.9%), Fingerprinting-based methods (9.5%), and k-Nearest Neighbors (k-NN) (9.5%). All other methods were studied less frequently: Multilateration (4.7%), Geometric Ranging (3.5%), Trilateration (2.3%), and, with 1.1% each, the methods Analytic, Entropy-based Time of Arrival/Flight (ToA/ToF), Hybrid Methods, k-Means Clustering + Random Forest, Kullback-Leibler Divergence, Maximum Shared Border, and QR Code Recognition. Almost half of the methods (7 out of 16) appeared in just one paper.

From the plot we can also derive that some works, no more than 14 (16.6%) to be exact, combined multiple positioning solutions in the non-collaborative part, as the sum of technologies, techniques, and methods was slightly higher than 100%. A further manual analysis revealed that eight of them (9.5%) combined IMU and Wi-Fi technologies [ 43 , 63 , 98 , 100 , 102 , 118 , 123 , 147 ]; two of them (2.4%) combined IMU with RFID [ 129 ] and UWB [ 144 ] technologies; one of them (1.1%) combined Wi-Fi and Bluetooth [ 142 ]; and three of them (3.6%) combined two different techniques based on Wi-Fi: AoA+ToA/ToF [ 50 ], RSSI+fingerprinting [ 42 ], and RSSI+ToA/ToF [ 133 ].

The most interesting part of Figure 5 is the combination in which (non-collaborative) technologies, techniques, and methods were used in CIPS. Several observations stand out from the figure.

Regarding the technologies:

  • The most used technology, Wi-Fi (used in 53.5% of all articles), was most often combined with the Received Signal Strength Indicator (RSSI) technique (42% of the articles using Wi-Fi technology) and, to an equal extent, with Fingerprinting (42%).
  • The Inertial Measurement Unit (IMU) technology was exclusively used in combination with the Dead Reckoning (DR) techniques and the Pedestrian Dead Reckoning (PDR) methods (with the exception of a single use of a collaborative algorithm).
  • Wi-Fi and Ultra-wide band (UWB) were the technologies that had been combined with the largest number of techniques (four techniques each). The most common technique for both technologies was Received Signal Strength Indicator (RSSI).

Regarding the techniques:

  • Dead Reckoning (DR) was used in combination with a single technology, Inertial Measurement Unit (IMU).
  • Received Signal Strength Indicator (RSSI) and Time of Arrival/Flight (ToA/ToF), respectively the first and fourth most used techniques, were used in combination with the greatest number of different technologies (respectively seven and three technologies). The TDoA, TWR, and AoA techniques were used with two technologies each; all other techniques were used with a single technology (with the exception of Fingerprinting, which was used with Wi-Fi and Bluetooth).
  • RSSI, Fingerprinting, and Time of Arrival/Flight (ToA/ToF) were used with the highest number of methods (six methods each).

Regarding methods:

  • The two most popular methods, PDR-based and Cooperative methods, together accounted for more than half of the reported uses (53.5%). Together with the group of four reasonably well-used methods (i.e., Ranging, RSSI-based, Fingerprinting-based methods, and k-NN), they appeared in almost 98% of the reviewed papers. The remaining 10 methods were less common and appeared in less than 20% of papers.
  • The popular Cooperative and Ranging methods (respectively the second and third most used) were combined with a variety of techniques. Cooperative methods were used in combination with RSSI (45% of inputs), TDoA (20% of inputs), ToA/ToF (20% of inputs), and with DR, TWR, and Fingerprinting (5% of inputs each). Ranging methods were highly coupled with the RSSI technique (67% of inputs to the method), but they were also used with other techniques, namely TWR (17% of inputs), ToA/ToF (8% of inputs), and UTDoA (8% of inputs). In contrast, k-NN exclusively worked with the Fingerprinting technique, which in turn was mainly used in combination with the Wi-Fi technology.
  • Artificial Intelligence (AI) was present in three positioning methods, namely k-NN (in 9.5% of analyzed works), and k-Means Clustering + Random Forest and Kullback-Leibler Divergence (in 1.1% of analyzed works each).
  • Almost half of the methods (7 out of 16) were only used in one article and were evidently each combined with a single technique and technology.

4.4. Collaborative Technologies, Techniques, and Methods

The Sankey diagram in Figure 6 shows the technologies, techniques, and methods used for the collaborative part of CIPS, where relevant (sensor) data are acquired and exchanged between actors/nodes, and collaborative positioning is calculated. It is constructed in the same way as for the non-collaborative part (see Section 4.3 ).

Figure 6. Collaborative technologies, techniques, and methods in CIPS. Source: Authors.

From Figure 6 , we immediately notice a broader range of combinations of technologies, techniques, and methods compared to the non-collaborative part of Collaborative Indoor Positioning Systems (CIPSs). A dominant technique, RSSI, arises from the diagram, yet it was combined with a multitude of technologies and methods.

We discerned twelve different technologies being used, of which Wi-Fi was used by almost half of the CIPSs (41.6%), followed by Ultra-wide band (UWB) (23.8%) and Bluetooth (19%). All other technologies were used in five or fewer papers each: Acoustic (5.9%), other RF technologies (4.7%), Radio-Frequency Identification (RFID) (3.5%), IEEE.802.15.4a.CSS (2.3%), Long-Term Evolution (LTE) (2.3%), VLC (2.3%), 5G (2.3%), Laser+Compass (1.1%), and Magnetic Resonant Sensor (1.1%). Seven out of twelve technologies appeared in just one or two papers.

We encountered nine different techniques used in the collaborative phase, with an overwhelming majority of systems using Received Signal Strength Indicator (RSSI) (72.6%), followed distantly by Time of Arrival/Flight (ToA/ToF) (13%). All other techniques were only sporadically used: Two-way Ranging (TWR) (8.3%), Fingerprinting (3.5%), Positioning Sharing (3.5%), Time Difference of Arrival (TDoA) (3.5%), Angle of Arrival (AoA) (2.3%), Multi-path Components (2.3%), and Uplink Time-Difference-of-Arrival (UTDoA) (1.1%). Six out of nine techniques appeared in three or fewer papers.

With respect to methods, we found a large dispersion of 30 methods, with Particle Filter as the most popular (22.6%), followed by Belief Propagation (10.7%), Extended Kalman Filter (EKF) (9.5%), and Geometric Algorithms (9.5%). The remaining 26 methods were applied in six or fewer works each: LS (7.1%), Trilateration (5.9%), and Bayesian Filtering (4.7%); Multidimensional Scaling, Non-Linear Least Squares (NLLS), Self-organizing Map, and Semidefinite Programming with 3.5% each; and Analytic, Gaussian Weighting Function, and Max. Gradient Descendant with 2.3% each. With 1.1% each, we found the following sixteen methods: Least Lost Matching Error, Likelihood Function, Coalitional Game, Devaluation Function, Distributed Stochastic Approx., Dynamic Location-convergence, Edge Spring Model, Information Filter, Kalman Filter, Max. Likelihood Estimator, Max. Shared Border, Non-parametric Belief Propagation, Probabilistic Density Distribution, Recursive Position Estimation, Simulated Annealing, and Spatial Analysis-based.

The plot also shows that only a few works, nine (10.71%), combined multiple positioning solutions in the collaborative part, as the sum over technologies, techniques, and methods was slightly higher than 100%. A further manual analysis revealed that six papers combined two technologies using RSSI as the technique (7.1%), namely Wi-Fi + acoustic [ 118 ], Bluetooth + acoustic [ 131 ], Wi-Fi + UWB [ 146 ], and Bluetooth + Wi-Fi [ 42 , 43 , 142 ], and three papers combined two Wi-Fi techniques (3.5%), namely RSSI+ToA/ToF [ 133 ], RSSI + fingerprinting [ 102 ], and AoA+ToA/ToF [ 50 ]. In those nine papers, the technologies or techniques were fused within the collaborative positioning method.

Analyzing the combinations of technologies, techniques, and methods from Figure 6 , we highlight the following results:

  • The most used technology, Wi-Fi (used in 41.6% of all articles) was in the majority of cases combined with the RSSI technique (68% of articles using Wi-Fi technology), yet to a lower extent also with ToA/ToF (20%), Fingerprinting (9%) and minimally with AoA (3%).
  • The top three technologies, Wi-Fi, UWB, and Bluetooth, were all combined with multiple techniques, respectively with four, five, and three. Specifically Wi-Fi with RSSI, ToA/ToF, Fingerprinting, and AoA; UWB with RSSI, ToA/ToF, TWR, TDoA, and Multipath components; Bluetooth with RSSI, ToA/ToF, and Positioning sharing, as can be observed in Figure 6 .
  • The RSSI technique was by far the most used (72.6%) and was combined with a large variety of technologies and methods. It was mostly combined with the technologies Wi-Fi (40.8%), Bluetooth (22.4%), UWB (14.4%), Acoustic (6.4%), RFID (4.8%), Other RF (3.2%), VLC (3.2%), IEEE.802.15.4a.CSS (1.6%), and Magnetic Resonant Sensor (1.6%), and it was combined with 24 of the 30 methods, with Particle Filter being the most used combination (28%).
  • Virtually every technique was combined with a diversity of technologies and methods. Only Fingerprinting and Multipath Components were combined with a single technology, respectively Wi-Fi and UWB; all techniques, except UTDoA, which appeared in just one paper, were combined with multiple methods.
  • The most used method, Particle Filtering, was used in combination with RSSI (85% of cases), TWR (10%), and Fingerprinting (5%).
  • Artificial Intelligence (AI) had a significant presence in collaborative methods, with more than 20 out of the 30 methods. The most popular methods were Particle Filter, Belief Propagation, Least Square, and Bayesian Filtering, which were present in 22.6%, 10.7%, 7.1%, and 4.7% of works, respectively. Other interesting AI methods were Multidimensional Scaling, Non-Linear Least Squares (NLLS), Self-organizing Map, and Semidefinite Programming, each present in 3.5% of works, followed by Gaussian Weighting Function and Max. Gradient Descendant, with a presence in 2.3% of papers each. The least common collaborative AI methods appeared in just one paper each and included Likelihood Function, Coalitional Game, Devaluation Function, Distributed Stochastic Approx., Information Filter, Max. Likelihood Estimator, Max. Shared Border, Non-parametric Belief Propagation, Probabilistic Density Distribution, Recursive Position Estimation, and Simulated Annealing.
  • A majority of methods were only used once (16 of 30) or twice (6 of 30). Evidently, each method that was used only once was combined with a single technique and technology.

4.5. Evaluation of Systems

Understanding how the systems were evaluated allows us to better interpret the significance of the results and qualify any comparison of reported results. Figure 4 d presents the type of evaluation performed on the reported CIPSs. We discern experimental, simulated, both, or evaluation type not specified. Overall, the embedded pie chart illustrates similar numbers of systems evaluated by simulation (45.24%) and experimentally (41.67%), with a minority of systems evaluated both experimentally and by simulation (8.33%) or not specified (4.76%). The bar chart reveals, over the years, an overall dominance of simulated evaluations until 2017. In the last three years (2018–2020), however, there was a large increase in experimental evaluations, combined with a relative drop in simulated evaluations, which caused the experimental to overtake the simulated evaluation (ratio roughly 2:1). Combined evaluations (i.e., both experimental and simulated) were only sporadically present over the time period 2006–2020.

5. Discussion

In this section, we further analyze and discuss the set of articles in light of the quantitative results presented in Section 4 , trying to uncover the underlying reasons for the findings. We hereby deepen the answers to research questions RQ1–RQ3. Finally, based on this deeper analysis, we address research question RQ4 and point out limitations, gaps, and future research avenues.

5.1. Architectures and Infrastructure of Collaborative Indoor Position Systems

Regarding the architectures for CIPSs, centralized architectures are less used than decentralized ones. Articles reporting on centralized CIPSs often outline implementation and deployment hurdles, which may deter their further use. Those problems include the high complexity of the algorithms required to solve the positioning problem in a cooperative way [ 77 , 103 , 126 , 129 ], communication bottlenecks and delays caused by massive data exchange between nodes and the centralized server [ 100 , 103 ], scalability in terms of the computational burden and concurrent users [ 95 , 100 , 129 ], and lack of robustness against failure [ 129 ]. In contrast, decentralized architectures are designed to share the computational processing among all the collaborative devices. Each node or actor pre-processes the collected data (for instance, calculating their position) and then broadcasts relevant information to other users. This procedure reduces the amount of transmitted raw data and alleviates the computation on the central node. Computational complexity evaluations are predominantly performed for decentralized architectures in order to demonstrate their superiority in terms of computational optimization [ 77 , 90 , 109 , 131 , 132 , 137 ], whereas other performance metrics, such as the accuracy, are relegated to a secondary role, as they depend primarily on the technology, technique, and method used. In other words, decentralized systems offer the same positioning accuracy with fewer computational problems. This explains why a majority of systems preferred a decentralized architecture (44.05%) over a centralized one, especially in the last 4 years.

The choice between infrastructure and infrastructure-less approaches is related to the type of sensing technology in use. A majority of research focuses on positioning scenarios in existing buildings, some of them re-using already existing infrastructure (i.e., signals of opportunity). Typical scenarios that are receiving a growing interest involve locating people within houses, offices [ 63 , 115 , 123 , 131 ], and universities [ 40 , 43 , 63 , 79 , 98 , 101 , 119 , 120 , 128 , 131 ], where the viability of installing complex infrastructure is low in terms of costs, unlike industrial or warehouse scenarios, which are capable of developing and deploying robust and costly infrastructures designed for positioning. As a result, our review indeed shows that infrastructure-less approaches are predominantly selected for CIPSs in research (see Figure 4 c), and that the majority of technologies used (see Figure 5 and Figure 6 ) are already present in the environment. They are either reused (i.e., Wi-Fi) or do not require any deployment (i.e., IMU, laser, compass).

5.2. Technologies, Techniques, and Methods in Collaborative Indoor Positioning Systems

5.2.1. Analysis on the Non-Collaborative Part

The results provided in Section 4.3 showed that in 66 articles, positioning was done in two steps: each node or actor used an indoor positioning method (based on one or multiple positioning technologies) to get an initial estimation with only its collected data [ 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 63 , 77 , 79 , 80 , 81 , 83 , 84 , 86 , 87 , 89 , 91 , 92 , 93 , 94 , 97 , 98 , 100 , 101 , 102 , 103 , 104 , 106 , 107 , 108 , 109 , 110 , 111 , 113 , 114 , 116 , 117 , 118 , 119 , 120 , 123 , 124 , 125 , 127 , 128 , 129 , 130 , 131 , 132 , 134 , 135 , 136 , 137 , 141 , 143 , 144 , 145 , 146 , 147 , 150 ]. The estimated position or the raw data was later used in the collaborative part to enable or improve other user’s positioning. The remaining 18 articles were fully collaborative, as they relayed the position estimation of the nodes and/or actors completely to a collaborative method (i.e., denoted as “Cooperative methods” in Figure 5 ). In other words, stand-alone positioning was not performed in these systems, and the collected raw data were directly processed using the corresponding proposed collaborative method [ 85 , 88 , 90 , 95 , 96 , 99 , 105 , 112 , 115 , 121 , 122 , 126 , 133 , 138 , 139 , 140 , 142 , 148 ]. Therefore, the cooperative methods are excluded from the discussion in this section devoted to the non-collaborative part; for full details on the collaborative systems, please see Section 5.2.2 . We also note that in two of these articles [ 133 , 142 ], authors proposed two different CIPSs.

Regarding stand-alone positioning in the non-collaborative phase, the most used methods were PDR [ 43 , 63 , 92 , 97 , 98 , 100 , 101 , 102 , 106 , 107 , 110 , 117 , 118 , 120 , 123 , 127 , 128 , 129 , 131 , 141 , 144 , 145 , 146 , 147 ], Ranging [ 42 , 46 , 47 , 81 , 93 , 109 , 111 , 113 , 130 , 132 , 144 , 150 ], RSS-based [ 40 , 41 , 43 , 48 , 77 , 84 , 89 , 103 , 123 , 129 ], k -NN [ 63 , 79 , 108 , 114 , 118 , 119 , 134 , 136 ], fingerprint-based [ 42 , 83 , 98 , 100 , 102 , 143 , 147 , 149 ], and multilateration [ 86 , 94 , 104 , 137 ] methods. Those methods were highly coupled to two main positioning techniques: RSSI (including fingerprinting techniques [ 108 , 110 , 114 , 116 , 134 ]) and DR, which in turn rely on communications technologies (mainly Wi-Fi) and inertial sensors respectively [ 40 , 41 , 42 , 43 , 45 , 48 , 50 , 63 , 79 , 81 , 83 , 84 , 85 , 87 , 88 , 89 , 90 , 93 , 95 , 98 , 100 , 102 , 103 , 104 , 105 , 108 , 109 , 112 , 114 , 116 , 118 , 119 , 122 , 123 , 129 , 130 , 133 , 134 , 136 ]. The major drawbacks of Wi-Fi and IMU as base positioning technologies are widely known. On the one hand, the positioning error provided by Wi-Fi-based positioning is around a few meters, whereas IMU-based solutions might suffer from accumulated drift errors. On the other hand, both can be considered infrastructure-less solutions. In the case of Wi-Fi, the already available network infrastructure for communications can be used for more-or-less accurate positioning (around a few meters) with no additional cost. As expected, most of collaborative works try to exploit widely used low-cost and simple IPSs with known issues, in order to improve them by means of collaboration. In fact, only a few CIPSs combined two or more positioning technologies in the non-collaborative [ 43 , 63 , 98 , 100 , 102 , 118 , 123 , 129 ] and in the collaborative [ 42 , 43 , 118 , 131 ] parts, although it is common in regular IPS. Just in one paper [ 43 ], multiple technologies—Wi-Fi+IMU in the non-collaborative part and Wi-Fi+Bluetooth in the collaborative part—were combined in both parts.
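To illustrate the kind of stand-alone positioning referred to here, the sketch below shows a textbook step-and-heading PDR update with a constant step length; it is a generic, simplified example with invented values, not the PDR variant of any reviewed CIPS. It also makes the drift problem mentioned above easy to reproduce: a small constant heading bias steadily bends the estimated track.

```python
import math

def pdr_update(x, y, heading_rad, step_length_m=0.7):
    """Advance a 2D position by one detected step along the current heading."""
    return (x + step_length_m * math.cos(heading_rad),
            y + step_length_m * math.sin(heading_rad))

# Invented example: four steps whose heading estimate slowly drifts.
pos = (0.0, 0.0)
for heading_deg in (0.0, 1.5, 3.0, 4.5):   # accumulated gyroscope drift
    pos = pdr_update(*pos, math.radians(heading_deg))
print(f"estimated position after 4 steps: ({pos[0]:.2f}, {pos[1]:.2f}) m")
```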

The positioning technologies that require large infrastructure, accurate calibration of the anchors, and provide high-accurate positioning have lower presence in the non-collaborative phase. The possible causes are (1) the less-likely integration of these technologies in wearable or human-tracking devices (e.g., smartphones have Wi-Fi support and inertial sensors, but only a few models support UWB); (2) the deployment costs might make it more attractive to explore infrastructure-less or less expensive solutions; (3) there is no need for collaboration as the deployed infrastructure offers full coverage of the operational area, and the positioning technology is accurate enough [ 19 ].

Another remarkable finding in the non-collaborative part is the lack of details of some key aspects of the CIPSs. Around a third of the reviewed papers did not provide enough details about the method used to provide the position estimate, only being cataloged as fingerprint-based [ 42 , 83 , 98 , 100 , 102 , 143 , 147 , 149 ], ranging [ 42 , 46 , 47 , 81 , 93 , 109 , 111 , 113 , 130 , 132 , 144 , 150 ] and RSS-based [ 40 , 41 , 43 , 48 , 77 , 84 , 89 , 103 , 123 , 129 ] methods. In those works, the authors considered the user’s positioning method in the non-collaborative part irrelevant, i.e., the main focus of the CIPS was to improve the user’s position in the collaborative part, regardless of the approach used in the non-collaborative part.

Table 5 presents a summary of the advantages and disadvantages of the five most popular non-collaborative methods, as well as their computational performance, positioning accuracy, and their implementation, among others. Regarding the PDR-based methods, although they provide a reasonable estimate of the trajectory, they accumulate positioning error over time. The ranging methods are negatively affected by NLOS conditions; however, in LOS conditions, they provide a good performance and distance estimation. RSSI-based methods are straightforward, and their positioning estimation accuracy relies on the quality of Radio Frequency (RF) signal strength measurements. The fingerprint-based methods need a good-quality radio map, i.e., a set of previously collected data/samples, to operate. Although k-NN is a fingerprint-based method with high accuracy and easy implementation, its computational complexity in the operational phase increases as the number of reference samples and APs increases.

Table 5. Summary of advantages and disadvantages of the most used non-collaborative and collaborative methods. The top section presents the five most used non-collaborative methods. The bottom section presents the six most used collaborative methods.

Non-collaborative methods:

  • PDR-based. Advantages: provides a reasonable estimate of the trajectory using only inertial sensors. Disadvantages: accumulates positioning error (drift) over time.
  • Ranging. Advantages: good performance and distance estimation under LOS conditions. Disadvantages: negatively affected by NLOS conditions.
  • RSSI-based. Advantages: straightforward to implement. Disadvantages: accuracy relies on the quality of the RF signal strength measurements.
  • Fingerprint-based. Advantages: does not require knowing the positions of the anchors (APs). Disadvantages: needs a good-quality radio map, i.e., a set of previously collected samples.
  • k-NN. Advantages: high accuracy and easy implementation. Disadvantages: computational complexity in the operational phase grows with the number of reference samples and APs.

Collaborative methods:

  • Particle Filter. Advantages: handles non-Gaussian and non-linear estimation. Disadvantages: computational complexity grows (more particles) as the required position accuracy increases.
  • Belief Propagation. Advantages: high reliability and versatility with different statistical models. Disadvantages: high computational cost.
  • EKF. Advantages: low computational complexity; works with non-linear models. Disadvantages: designed only for Gaussian noise conditions.
  • Geometric Algorithm. Advantages: low computational complexity. Disadvantages: accuracy highly dependent on the location and distribution of the nodes.
  • LS. Advantages: low computational complexity. Disadvantages: applicable only to linear models.
  • Trilateration. Advantages: low computational complexity. Disadvantages: requires three or more non-collinear reference points; performs poorly in NLOS conditions.
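As a concrete companion to the fingerprint-based and k-NN entries above, the following sketch estimates a position by weighted k-NN over a toy Wi-Fi radio map; the reference points, AP set, and RSSI values are invented, and a real deployment would use a much denser survey and handle missing APs.

```python
import math

# Toy radio map: reference point -> (x, y) in meters and RSSI vector [AP1, AP2, AP3] in dBm.
radio_map = [
    ((0.0, 0.0), [-40, -70, -80]),
    ((5.0, 0.0), [-55, -60, -75]),
    ((0.0, 5.0), [-60, -75, -55]),
    ((5.0, 5.0), [-70, -58, -50]),
]

def knn_position(online_rssi, k=3):
    """Weighted k-NN in signal space: closer fingerprints get larger weights."""
    dists = []
    for (x, y), ref_rssi in radio_map:
        d = math.dist(ref_rssi, online_rssi)  # Euclidean distance in dBm space
        dists.append((d, x, y))
    dists.sort()  # nearest fingerprints first
    weights = [1.0 / (d + 1e-6) for d, _, _ in dists[:k]]
    wx = sum(w * x for w, (_, x, _) in zip(weights, dists[:k]))
    wy = sum(w * y for w, (_, _, y) in zip(weights, dists[:k]))
    return wx / sum(weights), wy / sum(weights)

print(knn_position([-50, -63, -72]))  # invented online measurement
```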

5.2.2. Analysis on the Collaborative Part

The results on the collaborative part show that the techniques based on RSSI were predominant, which in turn mainly used diverse communication technologies (i.e., radio frequency, sound, and light). The RSSI was commonly used to estimate the distance between the emitter and receiver. In general, the term RSSI has been used as a synonym of the ranging positioning technique [ 151 , 152 , 153 ]. Less frequent but still relevant techniques are based on ToA/ToF or Two-way Ranging (TWR), which in most cases are also coupled to the communications technologies (Wi-Fi, Bluetooth, and UWB). It is important to mention that good position estimation and performance in collaborative systems require a good interplay between technologies, techniques, and methods, so that their advantages are exploited and their disadvantages compensated. Considering positioning accuracy and precision, the best CIPSs are based on VLC [ 135 ] and UWB [ 115 ], which are technologies that already provide high accuracy in conventional positioning systems.
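Since RSSI is used here essentially as a ranging technique, a common way to turn a received signal strength into a peer-to-peer distance is the log-distance path-loss model; the sketch below is a generic textbook version with invented calibration parameters (reference power at 1 m and path-loss exponent), not the propagation model of any particular reviewed system.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exponent=2.5):
    """Log-distance path-loss model: rssi = tx_power_at_1m - 10 * n * log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# Invented readings exchanged between two collaborating users.
for rssi in (-50.0, -65.0, -80.0):
    print(f"RSSI {rssi:5.1f} dBm -> approx. {rssi_to_distance(rssi):5.2f} m")
```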

In the collaborative part, the CIPSs are mainly using methods based on RSSI (used as a synonym of ranging or RSSI ranging) to calculate the relative distance between the involved users and actors. However, in around half of the works, the authors proposed a specific method—not widely used by other researchers—for positioning in the collaborative part. Regarding those methods, we observed that they were proposed for different purposes. Particle filter, Gaussian Weight Function, and Multidimensional Scaling were implemented to enhance robustness in CIPSs [ 50 , 98 , 116 , 120 ]. Belief propagation is mainly used because of its potential to achieve high accuracy in collaborative position estimation, but at the expense of high computational complexity. Correspondingly, the Collaborative Indoor Positioning Systems (CIPSs) that use Belief propagation tend to balance the trade-offs between the computational complexity and the positioning accuracy [ 46 , 48 , 90 , 99 , 100 , 109 , 132 , 137 ]. Trilateration and Geometric algorithms have been mainly used in those CIPSs that attempt to improve the energy consumption [ 42 , 43 ]. The methods EKF and LS, in combination with the UWB technology, were used in [ 49 , 115 ] to improve the position precision.

As with the non-collaborative part, the most used collaborative methods present advantages and drawbacks. The six most used methods were Particle Filter [ 83 , 89 , 93 , 98 , 102 , 117 , 118 , 120 , 123 , 126 , 129 , 136 , 146 , 147 , 149 ]; Belief Propagation [ 46 , 47 , 96 , 109 , 130 , 131 , 132 , 135 ]; EKF [ 106 , 107 , 115 , 125 , 140 , 141 , 150 ]; Geometric Algorithm [ 43 , 124 , 127 , 134 , 142 , 143 ]; LS [ 45 , 49 , 133 , 137 , 148 ]; Trilateration [ 40 , 41 , 42 , 94 ]. One of the main advantages of the methods based on Particle Filter is their capability of handling non-Gaussian and non-linear estimations; however, their computational complexity increases (the number of particles grows) as the required position accuracy increases. Methods based on Belief Propagation exhibit high reliability and versatility to be used with different statistical models, yet they also incur a high computational cost. On the contrary, the EKF, Geometric Algorithms, LS, and Trilateration have as an advantage a low computational complexity. Although the EKF works with non-linear models, it is only designed for Gaussian noise conditions. In Geometric Algorithms, the positioning accuracy is highly dependent on the location of the nodes; a bad distribution of nodes negatively affects it. Regarding the LS, one of its inconveniences is that it can only be applied to linear models. The Trilateration method can only be applied if there are three or more non-collinear points, and its performance is extremely poor in NLOS conditions. Table 5 provides a summary of the advantages and disadvantages described above.
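To make the most popular collaborative method tangible, the sketch below runs one predict-weight-resample cycle of a particle filter that fuses a PDR-style motion prediction with a single range measurement to a peer whose position is assumed known. It is a deliberately simplified, hypothetical example (500 particles, Gaussian range noise) rather than the algorithm of any reviewed paper.

```python
import math
import random

random.seed(0)

N = 500
# Initial belief: particles scattered around the origin (invented prior).
particles = [(random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)) for _ in range(N)]

def predict(particles, step_m, heading_rad, noise_std=0.15):
    """PDR-style motion model: move every particle one noisy step."""
    return [(x + (step_m + random.gauss(0, noise_std)) * math.cos(heading_rad),
             y + (step_m + random.gauss(0, noise_std)) * math.sin(heading_rad))
            for x, y in particles]

def weight(particles, peer_pos, measured_range_m, range_std=0.5):
    """Likelihood of the peer range measurement under a Gaussian noise model."""
    ws = []
    for x, y in particles:
        expected = math.dist((x, y), peer_pos)
        err = measured_range_m - expected
        ws.append(math.exp(-0.5 * (err / range_std) ** 2))
    total = sum(ws) or 1.0
    return [w / total for w in ws]

def resample(particles, weights):
    """Multinomial resampling proportional to the weights."""
    return random.choices(particles, weights=weights, k=len(particles))

particles = predict(particles, step_m=0.7, heading_rad=0.0)
weights = weight(particles, peer_pos=(3.0, 0.0), measured_range_m=2.3)
particles = resample(particles, weights)
x_est = sum(x for x, _ in particles) / N
y_est = sum(y for _, y in particles) / N
print(f"collaborative estimate: ({x_est:.2f}, {y_est:.2f}) m")
```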

5.2.3. Overarching Concerns

Sensor fusion is not usual in CIPS, neither in the non-collaborative nor in the collaborative parts, despite state-of-the-art IPSs combining multiple technologies to enhance their accuracy, robustness, and/or precision [ 154 , 155 , 156 ]. Only [ 43 , 63 , 98 , 102 , 118 , 123 , 129 ] applied sensor fusion in the non-collaborative part and [ 121 ] in the collaborative part. Similarly, none of the works considered a scenario where different non-collaborative positioning solutions (with different technologies, techniques, and/or methods) co-exist, as shown in the exemplary scenario in Figure 1 . In general, each CIPS has introduced a collaborative system that was built on top of a controlled and simple approach in the non-collaborative part. However, there are many alternatives to track and localize users in different environments. We consider that device (and therefore sensor) diversity should not be ignored, as real-world scenarios will encounter various heterogeneous data sources, and corresponding applications should be encouraged to consume data from as many sources as possible.

In addition to the aforementioned improvements, CIPSs present other advantages over conventional IPS approaches. Some Collaborative Indoor Positioning Systems (CIPSs) have extended the coverage without deploying additional expensive and/or complex infrastructure by using the users as auxiliary nodes [ 40 , 41 , 42 ]. Other Collaborative Indoor Positioning Systems (CIPSs) have reduced the positioning ambiguities, and therefore the positioning error, in harsh environments with NLOS by processing the absolute (non-collaborative) and relative (between users) positions with Belief propagation [ 46 , 47 , 135 ]. It seems that the indirect LOS provided by the users plays a key role in improving positioning.

Regarding the communication protocol and synchronization of the devices, the vast majority of the articles analyzed (90.5%) did not cover these aspects, as they mainly focused on demonstrating the effectiveness of the collaborative system rather than addressing practical problems arising in a real-world setting. The communication protocols identified in this review were User Datagram Protocol (UDP) [ 63 ] and Collection tree protocol (CTP) [ 94 ]. Several articles mentioned D2D communication, without specifying the exact protocol used (e.g., [ 79 ]). In order to address the synchronization problem between devices, TWR was the most used [ 47 , 79 , 126 , 128 , 129 ]. Alternatively, the Hop-synchronization with GNSS time was used as an accurate method to measure time of flight between Bluetooth nodes [ 45 ].
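Two-way ranging is attractive for this synchronization problem because each device measures intervals only on its own clock; the sketch below applies the standard single-sided TWR relation, time of flight = ((t4 - t1) - (t3 - t2)) / 2, to invented timestamps. It is a generic illustration, not the scheme of any specific reviewed system.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def twr_distance(t1, t2, t3, t4):
    """Single-sided two-way ranging.
    t1: initiator sends poll, t4: initiator receives response (initiator clock);
    t2: responder receives poll, t3: responder sends response (responder clock).
    """
    round_trip = t4 - t1      # measured entirely on the initiator's clock
    reply_delay = t3 - t2     # measured entirely on the responder's clock
    tof = (round_trip - reply_delay) / 2.0
    return tof * SPEED_OF_LIGHT

# Invented timestamps in seconds (true distance roughly 3 m, i.e., ~10 ns one way).
print(f"{twr_distance(0.0, 0.000000010, 0.000200010, 0.000200020):.2f} m")
```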

Rather surprisingly, some relevant overarching concerns were hardly or not addressed at all. For example, energy consumption was only considered in three articles, even though energy drain due to collaboration (when the positioning system runs as a background process), rather than for calculating one's own position, is highly relevant and may deter users from using a CIPS. Ref. [ 145 ] proposes an algorithm to save energy by reducing the re-broadcasting of messages among users. Ref. [ 43 ] uses a decentralized architecture, measured the energy consumption of the main components of their CIPS, and found the operating system (30%), Wi-Fi (20%), and the Bluetooth module (14%) to be consuming the most. Both [ 42 ], using a centralized architecture, and [ 43 ] furthermore reached the same conclusion: scanning for devices (e.g., using Bluetooth or BLE) or for wireless APs (e.g., for Wi-Fi) is a critical energy-consuming component of a CIPS. Attempts to reduce the energy consumption, for example by replacing continuous scanning with intermittent scanning, reduced position accuracy [ 43 ].

Moreover, even though privacy and security are addressed in traditional indoor positioning systems, these concerns were not discussed in the CIPS literature, where the focus is primarily on proofs-of-concept to show improved accuracy. Nevertheless, CIPSs are particularly vulnerable, as careless data exchange during the collaboration process (e.g., unencrypted communication, broadcasting raw sensor measurements, or calculated position estimates) may leave the user prone to third-party breaches and leak his/her position. Regarding privacy and security in non-collaborative indoor positioning systems, which are relevant for the non-collaborative part of CIPSs, we mention for example [ 157 ], which presents a malicious check-in defense scheme based on AP selection and big data analysis; [ 158 ], which introduces a practical privacy-preserving indoor localization scheme based on outsourcing, together with a security analysis; and [ 159 , 160 ], which discuss solutions based on the k-NN + Paillier cryptosystem, Support Vector Machine (SVM), and k-anonymity, among others.

5.3. Evaluation of Collaborative Indoor Position Systems

One of the main pillars of CIPS evaluation is how the experiments are designed and planned. Even though experimental evaluations are preferred, and there was a large increase in experimental evaluation during 2018, 2019, and 2020 (overtaking the amount of simulation-based evaluation), overall, we still observe a slightly higher number of simulation-based evaluations (i.e., 45.24% simulations versus 41.67% empirical experiments). Experimental evaluations best mimic the circumstances and operational conditions of real-life scenarios [ 43 , 79 ], yet they are more difficult to set up and perform [ 63 , 129 ], time-consuming [ 44 , 117 , 131 ], prone to various types of failure and errors [ 43 , 129 ], and (potentially) costly [ 115 , 125 , 129 ]. Additional difficulties of experimental evaluations are the difficult-to-control practical issues faced in real-world scenarios, for example, the loss of precision due to signal attenuation/interference [ 43 , 121 ] and NLOS conditions [ 43 ].

Simulations provide a controlled environment, where both data and collaborative algorithms can be simulated. This eliminates hardware failures and allows researchers to easily perform different runs of an experiment with different configurations. For example, the number of users [ 130 ], quantity and density of the reference points [ 46 , 130 ], (simulated) hardware configuration in the environment [ 99 , 126 , 133 ], and even the environment itself [ 46 , 77 ] can be easily modified in different runs of the experiment. Balancing the two, a minority of articles (8.33%) presented a mixed evaluation, whereby the experimental part corresponded to simplified tests in real environments to validate the system, and simulations were used to test it in more complex environments [ 63 , 100 , 118 ].

Regarding the metrics, all the reviewed works included the positioning accuracy, as we can observe in Figure 4 a. Going deeper into this particular evaluation metric, we observe that different measures have been provided which, in descending order of use, are as follows: the Cumulative Distribution Function (CDF) of the positioning error [ 43 , 44 , 79 , 80 , 108 , 112 , 113 , 116 , 118 , 119 , 126 , 129 , 131 , 134 , 150 ], the Root Mean Square Error (RMSE) [ 48 , 49 , 77 , 109 , 121 , 122 , 126 , 135 ], the standard deviation of the error [ 40 , 41 , 42 , 63 , 80 , 110 ], the minimum mean square error [ 46 , 117 , 129 , 130 , 133 ], and finally the average positioning error [ 105 , 123 , 132 , 134 ]. This heterogeneity makes a direct comparison of accuracy across works unfeasible.
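To illustrate why these different accuracy measures are hard to compare, the short sketch below computes the mean error, RMSE, standard deviation, and an empirical CDF percentile from the same invented set of per-fix positioning errors; the four numbers differ noticeably even though they summarize identical data.

```python
import math
import statistics

errors_m = [0.4, 0.8, 1.1, 1.5, 2.3, 3.0, 4.2]  # invented per-fix positioning errors

mean_err = statistics.mean(errors_m)
rmse = math.sqrt(statistics.mean(e ** 2 for e in errors_m))
std_dev = statistics.stdev(errors_m)
p75 = sorted(errors_m)[int(0.75 * (len(errors_m) - 1))]  # crude empirical CDF percentile

print(f"mean error: {mean_err:.2f} m, RMSE: {rmse:.2f} m, "
      f"std: {std_dev:.2f} m, 75th percentile (CDF): {p75:.2f} m")
```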

Computational complexity evaluates the performance of a system considering the following aspects: the workload required to estimate the position collaboratively [ 46 , 77 , 122 , 130 ], the communication overhead [ 109 ], and the execution time to solve the positioning problem [ 46 , 132 ]. Some of the suggested approaches to reduce the computational complexity are (i) applying the collaborative positioning algorithm to a restricted set of users to reduce execution time [ 46 , 132 ]; (ii) formulating the problem of collaborative positioning as a quasi-convex feasibility problem to deal with the complexity of the non-convex structure models, which reduces the computational load [ 77 ]; (iii) using a parametric belief propagation scheme and an analytical approximation to compute peer-to-peer messages in order to reduce the communication and computational cost [ 109 , 130 ].

Robustness determines how insensitive a positioning system is to variations in the input data or to execution failures. Just a few CIPSs were proposed with the main aim of increasing their robustness [ 47 , 50 , 98 , 116 , 120 ]. Robustness was measured against ranging errors, a limited number of online samples or peer users, an outdated fingerprinting database, and node failures. Some strategies to provide robustness are (i) to use a sum product algorithm over wireless networks to provide robustness against node failures [ 47 ]; (ii) to use a Gaussian neighborhood weighting method to eliminate multiple-bounce reflection paths [ 50 ]; (iii) to use Multidimensional Scaling and Procrustes analysis to exhibit robust performance in cases with a limited number of online samples or peer users, large ranging errors, and fluctuating RSS readings [ 116 ].

The two least represented evaluation metrics, energy consumption [ 42 , 43 , 144 ] (discussed in Section 5.2.3 ) and position precision [ 49 , 115 ] (both based on UWB technology), have only recently been considered.

5.4. Recommendations, Gaps, and Limitations

On the basis of the results and analysis performed in this systematic review, i.e., the current state of the art, we present the following recommendations for researchers regarding the development of collaborative indoor position systems for positioning of humans:

  • Architecture: A decentralized architecture is the most suitable option for a collaborative approach since it avoids communication bottlenecks, delays in response times, and dependence on a server. However, computing algorithms on (restricted) user devices limits the implementation of complex algorithms and, due to device variability, its performance might not be homogeneous for all users.
  • Infrastructure: A CIPS based on an infrastructure-less approach or on signals of opportunity might be preferable, due to the continuous mobility of users in different environments and the cost of developing an infrastructure to provide coverage of the operational area. In addition, an infrastructure-less approach provides versatility to the system so that it can be used in a larger number of scenarios. However, the lack of an ad hoc infrastructure for the CIPS implies a design challenge in order to compensate for the inaccurate positioning that uncontrolled environments provide. Only for specific real-world scenarios may an infrastructure-based approach be preferable.
  • Technologies: Despite the high positioning accuracy and precision provided by some technologies (mainly VLC, UWB, and 5G), Wi-Fi and BLE might currently be better suited, as other relevant factors are the ubiquity of the technologies used, the low implementation costs, and the low energy consumption that Wi-Fi and BLE offer. An evolution in general availability and supporting hardware, e.g., particularly in the case of 5G, may cause a shift in preferred technology.
  • Techniques: From the point of view of positioning accuracy, and considering Wi-Fi as the main positioning technology, Wi-Fi Fingerprinting is widely used because the position of the anchors (APs) is not needed. However, the techniques based on RSSI perform better when the geometry and distribution of the APs are well known (a minimal least-squares sketch is given after this list). Further investigation of the supporting infrastructure (e.g., estimating the APs by manual inspection or automatic detection [ 161 , 162 ]) might allow the replacement of fingerprint-based with more accurate RSSI-based methods.
  • Methods: Due to the diversity of scenarios and conditions in which the systems have been tested, it is difficult to specify which method is the most appropriate. We consider that different alternative methods should be compared in different dimensions—mainly accuracy, precision, robustness, and computational cost—when a new CIPS is proposed, and the final proposed one should be selected according to some pre-defined criteria (e.g., best positioning error, lowest execution time, or a trade-off between the two).
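As a minimal illustration of the Techniques recommendation above, the sketch below turns RSSI-derived distances to APs with known coordinates into a position estimate via linearized least squares; the AP coordinates and distances are invented, and real measurements would additionally require weighting and outlier handling.

```python
aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # invented AP coordinates (m)
dists = [5.2, 6.9, 7.3]                        # invented RSSI-derived distances (m)

# Linearize by subtracting the first circle equation from the others:
# 2*(x0 - xi)*x + 2*(y0 - yi)*y = di^2 - d0^2 - xi^2 + x0^2 - yi^2 + y0^2
(x0, y0), d0 = aps[0], dists[0]
A, b = [], []
for (xi, yi), di in zip(aps[1:], dists[1:]):
    A.append((2 * (x0 - xi), 2 * (y0 - yi)))
    b.append(di**2 - d0**2 - xi**2 + x0**2 - yi**2 + y0**2)

# Solve the 2x2 normal equations A^T A p = A^T b by hand (no external libraries).
ata = [[sum(r[i] * r[j] for r in A) for j in range(2)] for i in range(2)]
atb = [sum(r[i] * v for r, v in zip(A, b)) for i in range(2)]
det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
x = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
y = (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det
print(f"least-squares position estimate: ({x:.2f}, {y:.2f}) m")
```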

The above recommendations may serve as a guide to follow when designing further CIPS. However, as the wide variety of solutions reported in the reviewed papers shows, the decisions on each part must be tailored to the specific needs of each system.

The analysis of the reviewed papers also highlighted some restrictions and/or limitations, which can be considered gaps in current research and provide future research opportunities.

  • The proposed CIPSs tend to focus on excelling in one relevant characteristic, mainly the deployment costs, the computational complexity, real-time operation, energy consumption, or the positioning accuracy. The main limitation of current CIPSs is that none of them try to balance all these aspects, especially in complex environments.
  • In general, the CIPSs select a single technology for the non-collaborative part and a single technology for the collaborative part; the reviewed CIPSs exploit neither sensor fusion nor multiple positioning alternatives. We consider that technology diversity in both parts might make the CIPS more robust, as has been demonstrated in conventional IPSs.
  • None of the reviewed works considered the privacy of the users or the security of the CIPSs. Privacy is a main overarching concern that has already been regulated in many countries (e.g., the European General Data Protection Regulation (GDPR) [ 163 ]). The vast majority of positioning solutions (in the non-collaborative and collaborative phases) rely on communication technologies that can be attacked (i.e., mainly jamming or spoofing) to alter the outputs of the positioning system and/or the sensing data processed by the user, which might be considered a security breach of the CIPS. Energy consumption is also a relevant overarching concern, which may deter users from using a CIPS, and this area is insufficiently studied.
  • The evaluations of the CIPSs are tightly coupled to the technology used in the non-collaborative part. The community needs an evaluation framework able to objectively evaluate the collaborative part of a CIPS independently of the positioning technology used in the non-collaborative part. An important part of such a framework is comparable evaluation metrics. Moreover, evaluation considering multiple technologies working simultaneously has not been widely explored yet.
  • Evaluation is done through simulations in almost half of the reviewed works because it does not require deploying expensive hardware or manual labor. Although some simulated environments are able to mimic the real world, a comprehensive empirical evaluation is needed to demonstrate the feasibility of the proposed CIPSs in realistic conditions. A repository of extensive multi-sensor and multi-user datasets for that purpose could enhance research reproducibility, enable the fair comparison of CIPSs, reduce evaluation costs (assuming the datasets are publicly available), and be an incentive to further research CIPSs.

6. Conclusions

This article presented a systematic review on CIPSs for humans. After a well-defined search phase, 84 relevant articles were identified in the time frame 2006–2020, which were subsequently classified along the following dimensions: architecture, infrastructure, technologies, techniques, methods, and evaluation metrics. The performed analysis demonstrated the growing interest within the scientific community for the study of the CIPSs, with an overall increasing number of articles over the years.

Our study shows a predominant use of a decentralized architecture, with an increase especially in the last 3 years. Cited disadvantages of centralized architectures include computational complexity, communication bottlenecks, scalability, and lack of robustness against failure. Regarding infrastructure, our study revealed a large dominance of infrastructure-less systems, which seems to be related to practical issues rather than technical ones, including the use of already available hardware and an overall lower cost, which are more suitable for common scenarios. This does not take away the possibility of an infrastructure-based solution, as such solutions, while requiring more effort and being more expensive, have the potential to yield more accurate results than the infrastructure-less ones.

Regarding technologies, techniques, and methods, we separately analyzed the collaborative and non-collaborative parts of CIPSs. With respect to the non-collaborative part, in which relevant data are acquired and (optionally) positioning is determined by every individual node, the results show a wide diversity of technologies, techniques, and methods, making it difficult to declare a winning combination. Wi-Fi/RSSI, Wi-Fi/fingerprinting, and IMU/Dead Reckoning (DR) are widely used and are among the preferred combinations of technology and technique in the literature for the non-collaborative part. They are recommended in scenarios where deployment costs need to be low (they are infrastructure-less or depend on signals-of-opportunity), and the system needs to work/be implemented on smartphones. We consider that fingerprint-based methods could be improved with additional knowledge of the environment, replacing them with more accurate RSSI-based methods. In contrast, for the collaborative part, in which relevant data are exchanged between nodes and positioning is determined based on exchanged data, RSSI based on Wi-Fi and Bluetooth technologies is popular among researchers due to their ubiquity, fully infrastructure-less nature, low energy usage, and low cost. Regarding the methods, none of them stood out, as they have different objectives. For instance, Belief propagation provides high accuracy at the expense of high computational costs.

Some CIPSs are fully collaborative, and the non-collaborative part just gathers data, which are later processed by a collaborative method. Our recommendations are for decentralized systems where the non-collaborative and collaborative parts can work independently to provide positioning to the users. With centralized collaborative methods, if the central node fails, the position cannot be estimated for any user.

So far, most evaluations of CIPSs in the literature have relied on simulations. However, in recent years there has been a growing interest in experimental evaluations, so in the short term we may be looking at a turnaround. Empirical/experimental evaluation better mimics complex real-world conditions, and the obtained results are more relevant for the community than the simulation-based results. However, the relevant results are obtained at the expense of manual labor and, sometimes, expensive hardware deployments. We consider that creating a repository of heterogeneous datasets, as done for regular IPSs, is necessary for low-cost, multi-scenario evaluation.

CIPSs have demonstrated several benefits over non-collaborative IPSs. They may expand the coverage area of indoor location systems, e.g., by providing positioning to users located in areas uncovered by infrastructure. In addition, they reduce the positioning error by including the information of other users within the algorithms and by adding more reference points to be used to compute the position of a group of users. However, the CIPSs have disadvantages such as increased computing time, the computational load placed on all nodes, and energy consumption. On this basis, one of the most important trade-offs and gaps found in the literature is related to balancing the positioning accuracy, the real-time restrictions, and the computational complexity of the method with respect to improving the energy efficiency.

We consider that there is still ample opportunity for improvement and further research in the area of collaborative positioning. As most promising future avenues, we see exploiting sensor fusion at the non-collaborative and collaborative parts; considering device and technology diversity in the CIPS architecture; enhancing the security and privacy of the positioning systems and LBS; and defining a more comprehensive evaluation setup that considers multiple realistic scenarios (either through empirical experiments and/or open-available datasets).

Abbreviations

The following abbreviations are used in this manuscript:

AAL: Ambient Assisted Living
AI: Artificial Intelligence
AoA: Angle of Arrival
AP: Access Point
BLE: Bluetooth Low Energy
CDF: Cumulative Distribution Function
CIPS: Collaborative Indoor Positioning System
CSI: Channel State Information
CTP: Collection Tree Protocol
D2D: Device to Device
DR: Dead Reckoning
EKF: Extended Kalman Filter
FM: Frequency Modulation
GLONASS: Globalnaya Navigazionnaya Sputnikovaya Sistema
GDPR: General Data Protection Regulation
GNSS: Global Navigation Satellite System
GPS: Global Positioning System
IMU: Inertial Measurement Unit
IoT: Internet of Things
IPS: Indoor Positioning System
k-NN: k-Nearest Neighbors
LBS: Location-Based Service
LOS: Line-of-Sight
LS: Least Squares
LTE: Long-Term Evolution
MEMS: Microelectro-Mechanical System
NLLS: Non-Linear Least Squares
NLOS: Non-Line-of-Sight
PDR: Pedestrian Dead Reckoning
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
RF: Radio Frequency
RFID: Radio-Frequency Identification
RMSE: Root Mean Square Error
RSS: Received Signal Strength
RSSI: Received Signal Strength Indicator
SVM: Support Vector Machine
TDoA: Time Difference of Arrival
ToA: Time of Arrival
ToA/ToF: Time of Arrival/Flight
TWR: Two-way Ranging
UDP: User Datagram Protocol
UTDoA: Uplink Time-Difference-of-Arrival
UWB: Ultra-wide band
VLC: Visible Light Communication
WASP: Wireless Application Service Provider
Wi-Fi: IEEE 802.11 Wireless LAN
WSN: Wireless Sensor Network

Appendix A.1. Search Queries

Table A1 presents the search queries used to retrieve information from the Scopus and Web of Science databases on 8 January 2021. The search encompassed the period from 2006 until 2020.

Scopus and Web of Science search queries.

Database: Scopus
Input query: (TITLE-ABS-KEY (((Collabora* OR Coopera*) AND Indoor) AND (Position* OR Track* OR Locati* OR Locali* OR Navigat*)) AND LANGUAGE (english))
No. of articles: 1404

Database: Web of Science
Input query: TS=((Collabora* OR Coopera*) AND Indoor AND (Position* OR Track* OR Locati* OR Locali* OR Navigat*))
No. of articles: 1425

Appendix A.2. Articles Included in the Systematic Review

Table A2 fully discloses all classification data for all papers included in this review.

Information of the 84 articles included in the systematic review grouped by publication year. It includes the technology, technique, and method for the non-collaborative and collaborative parts. Arch stands for system architecture, which can be centralized (C), decentralized (D), centralized and decentralized (C&D), centralized or decentralized (C/D), or not specified (N/S). Infr stands for system infrastructure, which can be with infrastructure (W/I), infrastructure-less (I-L), or not specified (N/S). Eval stands for system evaluation and can be simulated (S), experimental (E), simulated+experimental (S+E), or not specified (N/S). Eval Metric stands for evaluation metrics and can be position accuracy (PA), position accuracy+robustness (PA+R), position accuracy+computational complexity (PA+CC), position accuracy+energy (PA+E), position accuracy+computational complexity+robustness (PA+CC+R), or position accuracy+position precision (PA+PP).

Columns, left to right: Year | Ref. | Technology (non-collaborative) | Technology (collaborative) | Technique (non-collaborative) | Technique (collaborative) | Method (non-collaborative) | Method (collaborative) | Arch. | Infr. | Eval. | Eval. Metric. Each data row below concatenates these fields in that order.
2020[ ]BluetoothBluetoothF. printingRSSIF. printing-B.Geom. AlgorithmDI-LEPA
[ ]IMU, UWBUWBDR, RSSIRSSIPDR-B. M., RangingBayesian F.DW/ISPA+E
[ ]IMUBluetoothDRRSSIPDR-B. M.EKFDI-LEPA
[ ]IMUWi-Fi, UWBDRRSSIPDR-B. M.P. FilterDI-LS+EPA+CC
[ ]IMU, Wi-FiUWBDR, F. printingTWRPDR-B. M., F. printing-BP.FilterDI-LEPA
[ ]5G5GRSSIRSSICoop. AlgorithmLSN/SW/ISPA
[ ]Wi-FiBluetoothF. printingRSSIF. printing-BP. FilterCI-LEPA
2019[ ]IMUUWBDRTWRPDR-B. M.EKFDW/IEPA
[ ]UWBUWBTWRTWRCoop. AlgorithmEKFCW/IEPA
[ ]UWBUWBTDoATDoACoop. AlgorithmBayesian F.N/SW/IEPA
[ ]UWBUWBTDoATDoACoop. AlgorithmAnalyticN/SW/IEPA
[ ]5G5GAoAAoAMultilaterationLSDI-LSPA+CC
[ ]UWBUWBToA/ToFToA/ToFEntropy-based ToALSDI-LSPA+PP
[ ]Wi-FiWi-FiRSSIRSSIRangingMultidimensional ScalingN/SI-LSPA+CC+R
[ ]VLCVLCRSSIRSSIRSSI-B. M.Max. Likelihood E.DW/ISPA+CC
[ ]RFIDRFIDRSSIRSSIAnalyticMultidimensional ScalingCW/IEPA
[ ]VLCVLCRSSIRSSITrilaterationB. PropagationDW/IEPA
[ ]Wi-FiWi-FiF. printingToA/ToFKNNAnalyticDI-LEPA
[ ]Wi-FiWi-FiF. printingRSSIKNNGeom. AlgorithmN/SI-LEPA
[ ]UWBUWBRSSIRSSIRangingB. PropagationDI-LSPA+CC
[ ]Wi-Fi, BluetoothWi-Fi, BluetoothRSSIRSSICoop. AlgorithmGeom. AlgorithmCW/IS+EPA
2018[ ]Wi-FiWi-FiF. printingRSSIKNNGeom. AlgorithmN/SI-LEPA
[ ]Wi-FiOther RFRSSITDoARSSI-B. M.Spatial Analysis-basedN/SI-LSPA
[ ]IMU, RFIDRFIDDR, RSSIRSSIPDR-B. M., RSSI-B. M.P. FilterCW/IEPA
[ ]Wi-FiWi-FiRSSI, ToA/ToFRSSI, ToA/ToFCoop. AlgorithmLSN/SN/SEPA
[ ]UWBUWBTWRTWoARangingB. PropagationN/SW/ISPA+CC
[ ]IMU, Wi-FiWi-Fi, BluetoothDR, RSSIRSSIPDR-B. M.Geom. AlgorithmDI-LEPA+E
[ ]IMUBluetooth, AcousticDRRSSIPDR-B. M.B. PropagationDI-LEPA+CC+R
[ ]Wi-FiWi-FiRSSIRSSIRangingB. PropagationN/SI-LSPA+CC
[ ]Wi-FiBluetoothF. printingRSSIK-mean clustering+R. ForestP. FilterCW/IEPA
[ ]IMUUWBDRTWRPDR-B. M.Bayesian F.DI-LS+EPA
[ ]IMU, Wi-FiWi-FiDR, F. printingF. printingPDR-B. M., KNNLeast Lost Matching EDI-LS+EPA
[ ]IMUOther RFDRPos. SharingPDR-B. M.Geom. AlgorithmCW/ISPA
[ ]LTELTETDoATWRCoop. AlgorithmP. FilterDI-LSPA
[ ]Hybrid S.Other RFHybrid Techni.RSSIHybrid MethodsEKFN/SI-LN/SPA
[ ]Laser+CompassLaser+CompassToA/ToFToA/ToFGeom. RangingGeom. AlgorithmCI-LEPA
[ ]IMU, Wi-FiBluetoothDR, RSSIRSSIPDR-B. M., RSSI-B. M.P. FilterN/SI-LSPA
[ ]Wi-FiWi-FiToA/ToFToA/ToFCoop. AlgorithmSemidefinite ProgrammingCI-LSPA+CC
[ ]IMUUWBDRRSSICoop. AlgorithmInfo. FilterN/SI-LEPA
2016[ ]IMUUWBDRRSSIPDR-B. M.P. FilterDI-LEPA+R
[ ]Wi-FiWi-FiF. printingToA/ToFKNNMax. Grad. DescentDI-LSPA
[ ]IMU, Wi-FiWi-Fi, AcousticDR, F. printingRSSIPDR-B. M., KNNP. FilterCW/IS+EPA
[ ]IMURFIDDRRSSIPDR-B. M.P. FilterCW/IEPA
[ ]Wi-FiOther RFF. printingRSSIKullback-Leibler Div.Multidimensional ScalingCW/IEPA+R
[ ]UWBUWBTDoARSSICoop. AlgorithmEKFCI-LSPA+PP
[ ]UWBUWBRSSIMultipath C.RangingEKFN/SN/SSPA+CC+R
[ ]Wi-FiWi-FiF. printingRSSIKNNSelf-organized mapDI-LSPA
[ ]Wi-FiWi-Fi, BluetoothRSSIRSSIRangingTrilaterationCI-LEPA+E
[ ]LTELTEUTDoAUTDoARangingNLLSCI-LSPA
[ ]Wi-FiWi-FiToA/ToFToA/ToFCoop. AlgorithmSemidefinite ProgrammingN/SN/SSPA
[ ]UWBUWBRSSIRSSIRangingSimulated AnnealingN/SW/ISPA
[ ]IMUWi-FiDRRSSIPDR-B. M.Semidefinite ProgrammingN/SI-LSPA
[ ]Wi-FiWi-FiToA/ToFToA/ToFRangingB. PropagationDW/IEPA+CC
[ ]Wi-FiBluetoothF. printingRSSIKNNEdge Spring M.CI-LSPA
[ ]Wi-FiBluetoothRSSIRSSIRSSI-B. M.TrilaterationCI-LEPA
2014[ ]IMUAcousticDRPos. SharingPDR-B. M.EKFDI-LEPA
[ ]IMUUWBDRRSSIPDR-B. M.EKFN/SI-LSPA
[ ]Wi-FiWi-FiRSSIRSSIRSSI-B. M.TrilaterationCI-LEPA
[ ]Wi-FiWi-FiRSSIRSSICoop. AlgorithmD. Stochastic Approx.DW/ISPA
[ ]Wi-FiBluetoothRSSIRSSIMultilaterationMax. Grad. DescentN/SI-LSPA
[ ]Wi-FiBluetoothToA/ToFToA/ToFTrilaterationLSDI-LS+EPA
[ ]Wi-FiWi-FiRSSIRSSIRSSI-B. M.Self-organized mapN/SW/ISPA
2013[ ]IMU, Wi-FiWi-FiDR, F. printingRSSI, F. printingPDR-B. M., F. printing-B. M.P. FilterN/SI-LSPA
[ ]IMUAcousticDRRSSIPDR-B. M.KFDI-LN/SPA
[ ]Wi-FiWi-FiF. printingRSSIF. printing-B. M.Likelihood func.C&DI-LSPA
[ ]UWBUWBRSSIRSSICoop. AlgorithmNon-Parametric B. PropagationN/SN/SSPA+CC
[ ]IMU, Wi-FiWi-FiDR, F. printingRSSIPDR-B. M., F. printing-B. M.P. FilterDI-LS+EPA+R
[ ]IMUBluetoothDRRSSIPDR-B. M.Bayesian F.DI-LEPA
[ ]IEEE.802.15.4a.CSSIEEE.802.15.4a.CSSTWRTWRRangingB. PropagationCW/IEPA+R
[ ]UWBUWBToA/ToFMultipath C.Coop. AlgorithmB. PropagationC/DI-LSPA
2011[ ]Wi-FiWi-FiF. printingF. printingCoop. AlgorithmSelf-organized mapDI-LSPA+CC
[ ]Wi-FiWi-FiAoA, ToA/ToFAoA, ToA/ToFGeom. RangingGWFN/SI-LSR
[ ]IEEE.802.15.4a.CSSIEEE.802.15.4a.CSSRSSIRSSIMultilaterationTrilaterationDW/IEPA
[ ]Wi-FiWi-FiRSSIRSSIRangingP. FilterN/SN/SSPA
[ ]IMUMagnetic Resonant S.DRRSSIPDR-B. M.Probabilistic D. Distrib.DI-LN/SPA
[ ]CameraBluetoothQR CodeRSSIQR Code RecognitionDevaluation Func.DW/IEPA
[ ]Wi-FiWi-FiRSSIRSSICoop. AlgorithmCoalitional GameDN/SSPA+CC
2010[ ]Wi-FiWi-FiRSSIRSSIRSSI-B. M.P. FilterCN/SSPA+CC
[ ]Wi-FiWi-FiRSSIRSSICoop. AlgorithmNLLSDI-LEPA+CC
2009[ ]Wi-FiBluetoothF. printingPos. SharingMax. Shared BorderMax. shared BorderDI-LN/SPA+CC
2007[ ]UWBUWBToA/ToFToA/ToFMultilaterationRec. Pos. Est.DI-LSPA
[ ]Wi-FiWi-FiRSSIRSSICoop. AlgorithmNLLSDI-LEPA
2006[ ]Wi-FiWi-FiF. printingRSSIF. printing-B. M.P. FilterDI-LEPA
[ ]Wi-FiWi-FiRSSIRSSIRSSI-B. M.D. Loc-coverageCI-LSPA

Author Contributions

Conceptualization, P.P., S.C. and J.T.-S.; methodology, P.P., S.C. and J.T.-S.; writing—original draft preparation, P.P., S.C. and J.T.-S.; writing—review and editing, S.C., J.T.-S., E.S.L. and J.N.; supervision, S.C., J.T.-S., E.S.L. and J.N. All authors have read and agreed to the published version of the manuscript.

The authors gratefully acknowledge funding from European Union’s Horizon 2020 Research and Innovation programme under the Marie Skłodowska Curie grant agreement No. 813278 (A-WEAR, http://www.a-wear.eu/ ). Sven Casteleyn is funded by the Ramón y Cajal Programme of the Spanish government, Grant No. RYC-2014-16606. Joaquín Torres-Sospedra is funded by the Torres Quevedo Programme of the Spanish government, Grant No. PTQ2018-009981.

Conflicts of Interest

The authors declare no conflict of interest.


  • Open access
  • Published: 03 May 2021

Indoor navigation: state of the art and future trends

  • Naser El-Sheimy 1 &
  • You Li   ORCID: orcid.org/0000-0003-3785-0976 1  

Satellite Navigation volume  2 , Article number:  7 ( 2021 ) Cite this article

25k Accesses

114 Citations

Metrics details

This paper reviews the state of the art and future trends of indoor Positioning, Localization, and Navigation (PLAN). It covers the requirements, main players, sensors, and techniques for indoor PLAN. Besides navigation sensors such as the Inertial Navigation System (INS) and the Global Navigation Satellite System (GNSS), environmental-perception sensors such as the High-Definition map (HD map), Light Detection and Ranging (LiDAR), the camera, the fifth generation of mobile communication technology (5G), and Internet-of-Things (IoT) signals are becoming important aiding sensors for PLAN. PLAN systems are expected to become more intelligent and robust with the emergence of more advanced sensors, multi-platform/multi-device/multi-sensor information fusion, self-learning systems, and the integration with artificial intelligence, 5G, IoT, and edge/fog computing.

Introduction

The Positioning, Localization, and Navigation (PLAN) technology has been widely studied and successfully commercialized in many applications such as mobile phones and unmanned systems. In particular, indoor PLAN technology is becoming increasingly important with the emergence of new chip-level Micro-Electromechanical System (MEMS) sensors, positioning big data, and Artificial Intelligence (AI) technology, as well as the increase of public interest and social potential.

The market value of indoor navigation: social benefits and economic value

The global indoor PLAN market is expected to reach $28.2 billion by 2024, growing at a Compound Annual Growth Rate (CAGR) of 38.2% (Goldstein 2019). Indoor PLAN has attracted the attention not only of consumer giants such as Apple and Google but also of self-driving players such as Tesla and Nvidia, because emerging vehicle applications (e.g., autonomous driving and connected vehicles) need indoor-PLAN capability. Compared with traditional vehicles, unmanned vehicles face three key problems: PLAN, environmental perception, and decision-making. A vehicle needs to position and localize itself within the surrounding environment before making decisions. Therefore, fully autonomous driving and location services can be achieved only by solving indoor PLAN.

Social benefits. Accurate PLAN can serve safety and medical applications and benefit special groups such as the elderly, children, and the disabled. Meanwhile, PLAN technology enables a series of location services, such as Mobility as a Service (MaaS), which increase travel convenience and security and reduce carbon emissions (by replacing owned vehicles with shared ones). Also, reliable PLAN technology can reduce road accidents, 94% of which are caused by human error (Singh 2015).

Economic values. As a demander of indoor PLAN, autonomous driving technology is expected to reduce the ratio of owned to shared vehicles to 1:1 by 2030 (Schönenberger 2019). By 2050, autonomous cars are expected to bring savings of 800 billion dollars annually by reducing congestion, accidents, energy consumption, and time consumption (Schönenberger 2019). These huge social and economic benefits drive the demand for PLAN technology in the autonomous-driving and mass consumer markets.

Classification of indoor navigation from market perspective

PLAN technology is highly related to market demand. Table 1 shows the accuracy requirements and costs of several typical indoor PLAN applications.

In general, applications that require higher accuracy incur correspondingly higher facility and equipment costs. In many scenarios (e.g., mass-market ones), equipment and installation costs are important factors that limit the scalability of PLAN technology.

Industry and construction require the PLAN accuracy at the centimeter- or even millimeter-level. For example, the accuracy requirements for machine guidance and deformation analysis are 1–5 cm and 1–5 mm, respectively. The corresponding cost is in the $ 10,000 level (Schneider 2010 ).

Compared with industry and construction, the PLAN accuracy requirements for autonomous driving are lower. However, the application scene is much larger and changes in more complex ways; also, the cost is more restrictive. Such factors increase the challenge of PLAN in autonomous driving. The Society of Automotive Engineers divides autonomous driving into L0 (no automation), L1 (driver assistance), L2 (partial automation), L3 (conditional automation, which requires drivers to be ready to take over when the vehicle issues an emergency alert), L4 (high automation, which does not require any user intervention but is limited to specific operational design domains, such as areas with specific facilities and High-Definition maps (HD maps)), and L5 (full automation) (SAE-International 2016). In most contexts, autonomous cars mean L3 and above. L5 is still some distance from commercial use (Wolcott and Eustice 2014). An important bottleneck is that it is difficult for PLAN technology to meet the requirements in all environments.

There are various derivations and definitions of the accuracy requirement of autonomous driving. Table 2 lists several of those derivations and definitions.

The research work (Basnayake et al. 2010) gives the accuracy requirements in Vehicle-to-Everything (V2X) applications as which-road (within 5 m), which-lane (within 1.5 m), and where-in-lane (within 1.0 m). The National Highway Traffic Safety Administration (NHTSA 2017) tentatively reports a requirement of 1.5 m (1 sigma, 68% probability) for lane-level information in safety applications. The research work (Reid et al. 2019) derives an accuracy requirement from road geometry standards and vehicle dimensions: for passenger vehicle operation, the bounds on lateral and longitudinal position errors are 0.57 m (0.20 m at 95% probability) and 1.40 m (0.48 m at 95% probability), respectively, on freeways, and both 0.29 m (0.10 m at 95% probability) on local streets. In contrast, the research work (Levinson and Thrun 2010) argues that centimeter positioning accuracy (a Root Mean Square (RMS) error within 10 cm) is sufficient for public roads, while the report (Agency 2019) defines the accuracy for autonomous driving as within 20 cm horizontally and within 2 m in height. Meanwhile, the research work (Stephenson 2016) reports that active vehicle control in ADAS and autonomous driving applications requires an accuracy better than 0.1 m. Beyond research, many autonomous-driving companies (e.g., (Nvidia 2020)) set the goal for autonomous driving at the centimeter level. To summarize, autonomous driving requires PLAN accuracy at the decimeter to centimeter level. The current cost is on the order of $1,000 to $10,000 (when using three-Dimensional (3D) Light Detection and Ranging (LiDAR)).

For indoor mapping, the review paper (Cadena et al. 2016 ) shows that the accuracy within 10 cm is sufficient for two-Dimensional (2D) Simultaneous Localization and Mapping (SLAM). Indoor mapping is commonly conducted with a vehicle that moves slower in a smaller area when compared with autonomous driving. The cost of a short-range 2D LiDAR for indoor mapping is in the order of $ 1000.

The research work (Rantakokko et al. 2010) illustrates that first responders require an indoor PLAN accuracy of 1 m horizontally and within 2 m in height. The cost for first responders is at the $1,000 level.

For mass-market applications, it is difficult to find a standard PLAN accuracy requirement. An accepted classification is that 1–5 m is high accuracy, 6–10 m is moderate, and over 11 m is low (Dodge 2013). The vertical accuracy requirement is commonly at the floor level. For such applications, it is important to use existing consumer equipment and reduce base-station deployment costs; on average, deploying in an area on the order of 100 m² costs on the order of $10. The E-911 cellular emergency system uses cellular signals and requires an error within 50 m for 80% of calls (FCC 2015).

The cost of indoor PLAN applications depends on the sensors used. The main sensors and solutions will be introduced in the following section.

Main players of indoor navigation

Various researchers and manufacturers investigate indoor PLAN problems from different perspectives.

Table 3 lists selected research works that reflect the typical navigation accuracy of different sensors, while Table 4 shows selected players from industry. The primary sensor, reported accuracy, and sensor costs are covered.

The actual PLAN performance is related to the factors such as infrastructure deployment (e.g., sensor type and deployment density), sensor grade, environment factors (e.g., the significance of features and area size), and vehicle dynamics.

In general, different types of sensors have various principles, measurement types, PLAN algorithms, performances, and costs. It is important to select the proper sensor and PLAN solution according to requirements.

State of the art

To achieve accurate and robust PLAN for autonomous vehicles, multiple types of sensors and techniques are required. Figure 1 shows some of the PLAN sensors that have been installed on autonomous cars. This section summarizes the state-of-the-art sensors and PLAN techniques.

figure 1

Part of PLAN sensors on an autonomous vehicle

Sensors for indoor navigation

The sensors include environmental monitoring and awareness sensors (e.g., HD map, LiDAR, RAdio Detection and Ranging (RADAR), camera, WiFi/BLE, 5G, and Low-Power Wide-Area Network (LPWAN)), and the navigation sensors (e.g., Inertial Navigation Systems (INS) and GNSS). The advantages and challenges for each sensor are also introduced and compared.

Environmental monitoring and awareness sensors (aiding sensors for navigation system)

Car-mounted road maps have been successfully commercialized since the beginning of this century. Also, companies such as Google and HERE have launched indoor maps for public places. These maps contain roads, buildings, and Point-of-Interest (POI) information and commonly have meter-level to decimeter-level accuracy. The main purpose of these maps is to assist people to navigate and perform location service applications. The main approaches for generating these maps are satellite imagery, land-based mobile mapping, and onboard GNSS crowdsourcing.

In the past decade, HD maps have received extensive attention. An important reason is that traditional maps are designed for people, not machines. Therefore, the accuracy of the traditional map cannot meet the requirements of autonomous driving. Also, the traditional map does not contain enough real-time information for autonomous driving, which requires not only information about the vehicle, but also information about external facilities (Seif and Hu 2016 ). With these features, the HD map is not only a map but also a "sensor" for PLAN and environment perception. Table 5 compares the traditional map and HD map.

HD map is key to autonomous driving. It is generally accepted that HD maps require centimeter-level accuracy and ultra-high (centimeter-level or higher) resolution. Accordingly, creating HD maps is a challenge. The creation and updating of the current HD maps are dependent on professional vehicles equipped with high-end LiDAR, cameras, RADARs, GNSS, and INS. For example, Baidu spent 5 days building an HD map in a Beijing park by using million-dollar-level mapping vehicles (Synced 2018 ). Such a generation method is costly; also, it is difficult to update an HD map continuously.

To mitigate the updating issue, crowdsourcing based on car-mounted cameras has been researched. This method can lower the need for extra data collection if the images from millions of cars are used properly. However, the task is extremely challenging. First, it is difficult to obtain PLAN solutions accurate enough for HD-map updating from crowdsourced data. Furthermore, to effectively update the HD map in an area where changes have occurred, there are challenges in transmitting, organizing, and processing massive crowdsourced data. For example, one hour of autonomous driving may collect one terabyte of data (Seif and Hu 2016), and it takes 230 days to transfer one week's autonomous driving data over WiFi (MachineDesign 2020). Thus, dedicated onboard computing chips, high-efficiency communication, and edge computing are needed. Crowdsourcing HD maps therefore requires cooperation among car, map, 5G, and terminal manufacturers (Abuelsamid 2017).

LiDAR systems use laser light waves to measure distances and generate point clouds (i.e., a set of 3D points). The distance is computed by measuring the time of flight of a light pulse, while the direction of a transmitted laser is tracked by gyros. By matching the measured point cloud with that stored in a database, an object can be located.
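As a minimal numerical illustration of this time-of-flight principle (the constant and function below are illustrative, not part of any LiDAR vendor interface), the range follows directly from the measured round-trip time and the speed of light:

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_to_range(round_trip_time_s: float) -> float:
    # The pulse travels out and back, so half of the round-trip
    # distance is the range to the reflecting object.
    return 0.5 * SPEED_OF_LIGHT * round_trip_time_s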

LiDAR is an important PLAN sensor on unmanned vehicles and robots. Figure  2 compares the PLAN-related performance of the camera, LiDAR, and RADAR.

figure 2

Comparison of camera, LiDAR, and RADAR performance

The main advantages of LiDAR are its high accuracy and data density. For example, the Velodyne HDL-64E LiDAR has a measurement range of over 120 m with a ranging accuracy of 1.5 cm (1 sigma) (Glennie and Lichti 2010). Its observations cover 360° horizontally, with up to 2.2 million points per second (Velodyne 2020). Such features make LiDAR a strong candidate for providing high-definition surrounding-environment information.

The main challenges of using LiDAR are its high price and large size. Also, current LiDAR systems have a rotating mechanism on top of the carrier, which may limit their life span. Some manufacturers are trying to use solid-state LiDAR to alleviate these problems. Apple has unveiled a new iPad Pro with a LiDAR scanner, which may bring new directions to indoor PLAN.

LiDAR measurements are used for PLAN through 2D or 3D matching. For example, the research works (de Paula Veronese et al. 2016 ) and (Wolcott and Eustice 2017 ) match LiDAR measurements with a 2D grid map and a 3D point cloud map, respectively. The PLAN performance is generally better when the surrounding environment features are significant and distinct from other places; otherwise, performance is limited. The LiDAR measurement performance will not be affected by light but may be affected by weather conditions.

Cameras are used for PLAN and perception by collecting and analyzing images. Compared with LiDAR and RADAR, the camera has a much lower cost and provides rich feature and color information. The camera is also a passive sensing technology: it does not transmit signals and thus has no errors on the signal-propagation side. Moreover, 2D computer-vision algorithms are relatively mature, which has further promoted the application of cameras.

Similar to LiDAR, the camera depends on the significance of environmental features. The camera is also more susceptible to weather and illumination conditions; its performance degrades under harsh conditions such as darkness, rain, fog, and snow. Thus, it is important to develop camera sensors with self-cleaning, a wider dynamic range, better low-light sensitivity, and higher near-infrared sensitivity. Furthermore, the amount of raw camera data is large: multiple cameras on an autonomous vehicle can generate gigabytes of raw data every minute or even every second.

Some PLAN solutions use cameras, instead of a high-end LiDAR, to reduce hardware cost. An example is Tesla's autopilot system (Tesla 2020 ). This system contains many cameras, including three forward cameras (wide, main, and narrow), four side cameras (forward and rearward), and a rear camera. To assure the PLAN performance in the environments that are challenging for cameras, RADARs and ultrasonic sensors are used.

The two main camera-based PLAN approaches are visual odometry/SLAM and image matching. For the former, the research work (Mur-Artal and Tardós 2017 ) can support visual SLAM using monocular, stereo, and Red–Green–Blue-Depth (RGB-D) cameras. For image matching, road markers, signs, poles, and artificial features (e.g., Quick Response (QR) codes) can be used. The research work (Gruyer et al. 2016 ) uses two cameras to take the ground road marker and match it with a precision road marker map. In contrast, the research works (Wolcott and Eustice 2014 ) and (McManus et al. 2013 ) respectively use images from monocular and stereo cameras to match the 3D point cloud map generated by a survey vehicle equipped with 3D LiDAR scanners.

RADAR has also received intensive attention in the autonomous-driving industry. Similar to LiDAR, RADAR determines the distance by measuring the round-trip time of the signal; the difference is that RADAR emits radio waves instead of laser pulses. Compared with LiDAR, RADAR generally has a longer measurement range; for example, the Bosch LRR RADAR can reach up to 250 m. Also, the price of a RADAR system has dropped to the order of $100 to $1,000. Moreover, RADAR systems are lightweight, which makes it possible to embed them in cars.

On the other hand, the density of RADAR measurements is much lower than that of LiDARs and cameras. Therefore, RADAR is often used for obstacle avoidance, rather than as the main sensor of PLAN. Similar to LiDAR, the measurement performance of RADAR is not affected by light but may be affected by weather conditions.

WiFi and BLE are the most widely used indoor wireless PLAN technologies for consumer electronics. The commonly used observation is RSS (Zhuang et al. 2016 ), and the typical positioning accuracy is at meter-level. Also, researchers have extracted high-accuracy measurements, such as CSI (Halperin et al. 2011 ), RTT (Ciurana et al. 2007 ), and AoA (Quuppa 2020 ). Such measurements can be used for decimeter-level or even centimeter-level PLAN.

A major advantage of WiFi systems is that they can use existing communication facilities. In contrast, BLE is flexible and convenient to deploy. To meet the future Internet-of-Things (IoT) and precise localization requirements, new features have been added to both the latest WiFi and BLE technologies. Table 6 lists the new WiFi, BLE, 5G, and LPWAN features that can enhance PLAN. WiFi HaLow (WiFi-Alliance 2020 ) and Bluetooth long range (Bluetooth 5) (Bluetooth 2017 ) are released to improve the signal range, while WiFi RTT (IEEE 802.11 mc) (IEEE 2020) and Bluetooth direction finding (Bluetooth 5.1) (Bluetooth 2019 ) have been released for precision positioning.

5G has attracted intensive attention due to its high speed, high reliability, and low latency in communication. Compared with previous cellular technologies, 5G defines three application categories (Restrepo 2020): Ultra-Reliable and Low-Latency Communication (URLLC) for high-reliability (e.g., 99.999% reliability under 500 km/h high-speed motion) and low-latency (e.g., millisecond-level) scenarios such as vehicle networks, industrial control, and telemedicine; enhanced Mobile Broadband (eMBB) for high-data-rate (e.g., gigabit-per-second-level, with a peak of 10 gigabits per second) and high-mobility scenarios such as video, augmented reality, virtual reality, and remote officing; and massive Machine-Type Communication (mMTC) for scenarios with massive numbers of low-cost, low-power, low-data-rate nodes, such as intelligent agriculture, logistics, home, city, and environment monitoring.

5G has strong potential to change the cellular-based PLAN. First, the coverage range of 5G base stations may be shrunk from kilometers to hundreds of meters or even within 100 m (Andrews et al. 2014 ). The increase of base stations will enhance the signal geometry and mitigate Non-Line-of-Sight (NLoS) conditions. Second, 5G has new features, including mmWave Multiple-Input and Multiple-Output (MIMO), large-scale antenna, and beamforming. These features make it possible to use multipath signals to enhance PLAN (Witrisal et al. 2016 ). Third, 5G may introduce device-to-device communication (Zhang et al. 2017a ), which makes cooperative PLAN possible.

Meanwhile, the newly emerged IoT signals and Low-Power Wide-Area Networks (LPWAN, e.g., long-range (LoRa), Narrow Band-IoT (NB-IoT), Sigfox, and Long Term Evolution for Machines (LTE-M)) have advantages such as long range, low cost, low power consumption, and massive connections (Li et al. 2020a). Figure 3 demonstrates the communication ranges of 5G and LPWAN signals, compared with other wireless technologies.

figure 3

Signal ranges of 5G, LPWAN, and other wireless technologies (Li et al. 2020a )

5G and LPWAN systems make wide-area localization possible in indoor and urban areas. Like 5G, LPWAN systems no longer require the extra communication module, on the order of $10, used in current PLAN systems. LPWAN signals are compatible with more and more smart-home appliances; these nodes will increase the deployment density of IoT networks and thus enhance PLAN performance. Also, it is feasible to add new measurement types (e.g., TDoA (Leugner et al. 2016) and AoA (Badawy et al. 2014)) to 5G and LPWAN base stations.

Most of the existing research on 5G- and LPWAN-based PLAN relies on theoretical analysis and simulation data because few real systems exist. The standardization of mmWave signals has lagged, which makes it difficult to find hardware for experiments. The accuracy ranges from the 100-m level to the centimeter level, depending on the base-station deployment density and the type of measurement used. The survey paper (Li et al. 2020a) provides a systematic review of 5G and LPWAN standardization, PLAN techniques, error sources, and their mitigation. In particular, it categorizes PLAN errors into end-device-related, environment-related, base-station-related, and data-related errors. It is important to mitigate these error sources when using 5G and LPWAN signals for PLAN purposes.

There are indoor PLAN solutions based on other types of environmental signals, such as the magnetic (Kok and Solin 2018 ), acoustic (Wang et al. 2017 ), air pressure (Li et al. 2018 ), visible light (Zhuang et al. 2019 ), and mass flow (Li et al. 2019a ).

Navigation and positioning sensors

Inertial navigation system.

An INS derives motion states by using angular-rate and linear specific-force measurements from gyros and accelerometers, respectively. The review paper (El-Sheimy and Youssef 2020 ) summarizes the state of the art and future trends of inertial sensor technologies. INS is traditionally used in professional applications such as military, aerospace, and mobile surveying. Since the 2000s, low-cost MEMS-based inertial sensors were introduced into the PLAN of land vehicles (El-Sheimy and Niu 2007a , b ). Since the release of the iPhone 4, MEMS-based inertial sensors have become a standard feature on smartphones and have brought in new applications such as gyro-based gaming and pedestrian indoor PLAN. Table 7 compares a typical inertial sensor performance in mobile mapping and mobile phones. Different grades of inertial sensors have various performances and costs. Thus, it is important to select a proper type of inertial sensors according to application requirements.

The INS can provide autonomous PLAN solutions, which means it does not require the reception of external signals or the interaction with external environments. Such a self-contained characteristic makes it a strong candidate to ensure PLAN continuity and reliability when the performances of other sensors are degraded by environmental factors. An important error source for INS-based PLAN is the existence of sensor errors, which will accumulate and lead to drifts in PLAN solutions. There are deterministic and stochastic sensor errors. The impact of deterministic errors (e.g., biases, scale factor errors, and deterministic thermal drifts) may be mitigated through calibration or on-line estimation (Li et al. 2015 ). In contrast, stochastic sensor errors are commonly modeled as stochastic processes (e.g., white noises, random walk, and Gaussian–Markov processes) (Maybeck 1982 ). The statistical parameters of stochastic models can be estimated by the methods such as power spectral density analysis, Allan variance (El-Sheimy et al. 2007 ), and wavelet variance (Radi et al. 2019 ).
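As a sketch of how such stochastic parameters can be characterized from a static recording, the following Python snippet computes a simple non-overlapping Allan deviation of a gyro signal; the function and its defaults are illustrative assumptions, not code from the cited works.

import numpy as np

def allan_deviation(rates, fs, taus):
    # rates: 1-D array of static angular-rate samples (rad/s)
    # fs:    sampling frequency (Hz)
    # taus:  averaging times (s) at which to evaluate the deviation
    adev = []
    for tau in taus:
        m = int(round(tau * fs))                 # samples per cluster
        n = len(rates) // m if m >= 1 else 0     # number of clusters
        if n < 2:
            adev.append(np.nan)
            continue
        cluster_means = rates[: n * m].reshape(n, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(cluster_means) ** 2)   # Allan variance
        adev.append(np.sqrt(avar))
    return np.array(adev)

Plotting the resulting deviation against tau on a log–log scale exposes the usual noise regimes (e.g., angle random walk and bias instability).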

Global navigation satellite system (as an initializer)

GNSS localizes a receiver using satellite multilateration. It is one of the most widely used and best-commercialized PLAN technologies. Standalone GNSS and GNSS/INS integration are the mainstream PLAN solutions for outdoor applications. In autonomous driving, GNSS is shifting from the primary PLAN sensor to a secondary core sensor, mainly because GNSS signals may be degraded in urban and indoor areas. Even so, high-precision GNSS is still important for providing an initial localization that reduces the search space and computational load of other sensors (e.g., HD map and LiDAR) (Levinson et al. 2007).

The previous boundaries between high-precision professional and mass-market GNSS uses are blurring. One piece of evidence is the integration of high-precision GNSS techniques with mass-market chips. Also, the latest smartphones are becoming able to provide high-precision GNSS measurements and PLAN solutions.

Table 8 lists the main GNSS positioning techniques. Single Point Positioning (SPP) and Differential-GNSS (DGNSS) are based on pseudo-range measurements, while Real-Time Kinematic (RTK), Precise Point Positioning (PPP), and PPP with Ambiguity Resolution (PPP-AR) are based on carrier-phase measurements. DGNSS and RTK are relative positioning methods that mitigate some errors by differencing measurements across the rover and base receivers. In contrast, PPP and PPP-AR provide precise positioning at a single receiver by using precise satellite orbit correction, clock correction, and parameter-estimation models. They commonly need minutes for convergence (Trimble 2020 ).

There are other types of PLAN sensors, such as the magnetometer, odometer, UWB, ultrasonic, and pseudolite. In recent years, relatively low-cost UWB and ultrasonic sensors have appeared (e.g., Decawave 2020; Marvelmind 2020). Such sensors can typically provide decimeter-level ranging accuracy within a distance of 30 m. Also, Apple has built a UWB module into the iPhone 11, which may bring new opportunities for indoor PLAN. To summarize, Table 9 illustrates the principles, advantages, and disadvantages of the existing PLAN sensors.

Techniques and algorithms for indoor navigation

The PLAN techniques include position-fixing, Dead-Reckoning (DR), database matching, multi-sensor fusion, and motion constraints. Figure  4 demonstrates the indoor PLAN techniques. The details are provided in the following subsections.

figure 4

Techniques for indoor navigation

Position-fixing techniques

Geometrical position-fixing methods have been widely applied over the past few decades, especially in the field of satellite positioning and wireless sensor networks. The basic principle is the geometric calculation of distance and angle measurements. By the type of measurement, position-fixing methods include range-based (e.g., multilateration, min–max, centroid, proximity, and hyperbolic positioning), angle-based (e.g., multiangulation), and angle-and-range-based (e.g., multiangulateration). Figure  5 shows the basic principle of these methods.

figure 5

Principle of position-fixing methods

Range-based methods

The location of a device can be estimated by measuring its distance to at least three base stations (or satellites) whose locations are known. The most typical method is multilateration (Guvenc and Chong 2009 ), which is geometrically the intersection of multiple spheres (for 3D positioning) or circles (for 2D positioning). Also, the method has several simplified versions. For example, the min–max method (Will et al. 2012 ) computes the intersection of multiple cubes or squares, while the centroid method (Pivato et al. 2011 ) calculates the weighted average of multiple base station locations. Moreover, the proximity method (Bshara et al. 2011 ) is a further simplification by using the location of the closest base station. Meanwhile, the differences of device-base-station ranges can be used to mitigate the influence of device diversity and some signal-propagation errors (Kaune et al. 2011 ).
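For illustration, a linearized least-squares multilateration in 2D might look like the following sketch, assuming three or more anchors with known coordinates and noisy range estimates (the helper is illustrative, not taken from the cited works).

import numpy as np

def multilaterate_2d(anchors, ranges):
    # anchors: (k, 2) array of known base-station positions
    # ranges:  (k,) array of measured distances to each anchor, k >= 3
    # Subtracting the first range equation from the others removes the
    # quadratic terms and leaves a linear system A x = b in the unknown (x, y).
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    x0, y0 = anchors[0]
    r0 = ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        A.append([2.0 * (xi - x0), 2.0 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    position, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return position  # estimated (x, y)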

For position-fixing, the base-station locations are usually set manually or estimated using base-station localization approaches (Cheng et al. 2005). The distances between the device and the base stations are modeled with Path-Loss Models (PLMs), whose parameters are estimated (Li 2006). To achieve accurate ranging, it is important to mitigate the influence of error sources (e.g., ionospheric errors, tropospheric errors, wall effects, and human-body effects). In addition, it is necessary to reduce the influence of end-device factors (e.g., device diversity).
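As an example of how a path-loss model is typically inverted to turn an RSS observation into a range, the sketch below uses the common log-distance model; the reference RSS and path-loss exponent are assumed calibration values rather than figures from the cited papers.

def rss_to_distance(rss_dbm, rss_at_d0_dbm=-40.0, d0=1.0, path_loss_exponent=2.5):
    # Log-distance model: RSS(d) = RSS(d0) - 10 * n * log10(d / d0).
    # Solving for d gives the range estimate below. The exponent n is
    # environment dependent (about 2 in free space, larger indoors).
    return d0 * 10 ** ((rss_at_d0_dbm - rss_dbm) / (10.0 * path_loss_exponent))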

The research work (Petovello 2003) describes the range-based PLAN algorithm and its quality control. Meanwhile, the research work (Langley 1999) proposes an index (i.e., the dilution of precision) for evaluating signal geometry. A strong geometry is a necessary but not sufficient condition for accurate range-based localization, because other error sources, such as stochastic ones, remain.

Angle-based methods

Triangulation, a typical AoA-based PLAN method, computes the device location by using direction measurements to multiple base stations that have known locations (Bai et al. 2008). When direction-measurement uncertainty is considered, the direction measurements from two base stations intersect in a quadrilateral region rather than at a point. The research work (Wang and Ho 2015) provides a theoretical derivation and performance analysis of the triangulation method.
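A minimal sketch of planar triangulation from two bearings is shown below; it assumes the bearings are expressed in the navigation frame and the two rays are not parallel, and it ignores the measurement uncertainty that produces the quadrilateral region mentioned above.

import numpy as np

def triangulate_2d(p1, bearing1, p2, bearing2):
    # p1, p2:   known 2-D base-station positions
    # bearings: directions (rad, from the x-axis) from each station toward the device
    # Each bearing defines a ray p + t * [cos(b), sin(b)]; the device position
    # is the intersection of the two rays, solved as a 2x2 linear system.
    d1 = np.array([np.cos(bearing1), np.sin(bearing1)])
    d2 = np.array([np.cos(bearing2), np.sin(bearing2)])
    A = np.column_stack((d1, -d2))
    t1, _t2 = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t1 * d1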

Angle-based PLAN solutions can typically provide high accuracy (e.g., decimeter level) in a small area (e.g., 30 m by 30 m) (Quuppa 2020). The challenge is that AoA systems require specific hardware (e.g., an antenna array and a phase-detection mechanism) (Badawy et al. 2014), which is complex and costly. There are low-cost angle-based solutions, such as those that use RSS measurements from multiple directional antennae (Li et al. 2020b); however, for wide-area applications, both the angle-measurement and PLAN accuracy are significantly degraded. Bluetooth 5.1 (Bluetooth 2019) has added direction measurement, which may change angle-based PLAN.

Angle-and-range-based methods

Multiangulateration, a typical angle-and-range-based PLAN method, calculates the device location from its relative direction and distance to a base station with a known position. This approach is widely used in engineering surveying. For indoor PLAN, one solution is to localize a device by its direction to a ceiling-installed AoA base station (Quuppa 2020) together with the known ceiling height. This approach is reliable and reduces the dependence on the number of base stations; however, the cost is high when it is used in wide-area applications.

In general, geometrical position-fixing methods are suitable for the environments (e.g., outdoors and open indoors) that can be well modeled and parameterized. By contrast, it is more challenging to use such methods in complex indoor and urban areas due to the existence of error sources such as multipath, NLoS conditions, and human-body effects. The survey paper (Li et al. 2020a ) has a detailed description of the error sources for position-fixing methods. It is difficult to alleviate the device-, signal-propagation-, and base-station-related error sources by the position-fixing technique itself. Thus, it is common to integrate with other PLAN techniques, such as DR and database matching.

Dead-reckoning techniques

The basic principle of DR technology is to derive the current navigation state by using the previous navigation state and the angular and linear movements. The angular and linear movements can be obtained by using the measurements of sensors such as inertial sensors, cameras, magnetometers, and odometers. Among them, inertial sensors are most widely used for DR. There are two main DR algorithms based on inertial sensors: INS mechanization and PDR. The former is widely used in land-vehicle, airborne, and shipborne PLAN applications, while the latter is a common method for pedestrian navigation. Figure  6 shows the flow of the INS mechanization and PDR algorithms. INS can provide 3D navigation results, while PDR is a 2D navigation method.

figure 6

Diagram of INS mechanization and PDR algorithms

The INS mechanization works on the integration of 3D angular rates and linear accelerations (Titterton et al. 2004 ). The gyro-measured angular rates are used to continuously track the 3D attitude between the sensor frame and the navigation frame. The obtained attitude is then utilized to transform the accelerometer-measured specific forces to the navigation frame. Afterward, the gravity vector is added to the specific force to obtain the acceleration of the device in the navigation frame. Finally, the acceleration is integrated once and twice to determine the 3D velocity and position, respectively. Therefore, the residual gyro and accelerometer biases in general cause position errors proportional to time cubed and time squared, respectively.
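The following sketch mirrors these steps for a deliberately simplified planar case (attitude integration, rotation of the specific force, then velocity and position integration); a full 3D mechanization additionally handles the gravity vector, Earth rotation, and frame transport terms, so this is only an illustration of the structure.

import numpy as np

def ins_update_2d(pos, vel, heading, gyro_z, specific_force_body, dt):
    # pos, vel: 2-D position (m) and velocity (m/s) in the navigation frame
    # heading:  yaw angle (rad) from the navigation x-axis to the body x-axis
    # gyro_z:   angular rate about the vertical axis (rad/s)
    # specific_force_body: 2-D specific force in the body frame (m/s^2),
    #                      with gravity assumed compensated in this planar sketch
    heading = heading + gyro_z * dt                       # attitude integration
    c, s = np.cos(heading), np.sin(heading)
    R_nb = np.array([[c, -s], [s, c]])                    # body-to-navigation rotation
    accel_nav = R_nb @ np.asarray(specific_force_body)    # rotate specific force
    vel = vel + accel_nav * dt                            # first integration: velocity
    pos = pos + vel * dt                                  # second integration: position
    return pos, vel, heading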

In contrast, the PDR algorithm (Li et al. 2017 ) determines the current 2D position by using the previous position and the latest heading and step length. Thus, it consists of platform-heading estimation, step detection, and step-length estimation. The platform heading is usually calculated by adding the device-platform misalignment (Pei et al. 2018 ) into the device heading, which can be tracked by an Attitude and Heading Reference System (AHRS) algorithm (Li et al. 2015 ). The steps are detected by finding periodical characteristics in accelerometer and gyro measurements (Alvarez et al. 2006 ), while the step length is commonly estimated by training a model that contains walking-related parameters (e.g., leg length and walking frequency) (Shin et al. 2007 ).
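A toy version of these PDR components might look like the sketch below; the peak-detection thresholds and the fixed step length are illustrative assumptions, not the trained models referred to in the cited works.

import numpy as np

def detect_steps(acc_norm, fs, peak_threshold=1.5, min_interval_s=0.3):
    # acc_norm: accelerometer magnitude with gravity removed (m/s^2)
    # Returns sample indices of detected steps (simple peak picking).
    steps, last_idx = [], -np.inf
    for k in range(1, len(acc_norm) - 1):
        is_peak = acc_norm[k] > acc_norm[k - 1] and acc_norm[k] > acc_norm[k + 1]
        if is_peak and acc_norm[k] > peak_threshold and (k - last_idx) / fs > min_interval_s:
            steps.append(k)
            last_idx = k
    return steps

def pdr_step_update(pos, heading, step_length=0.7):
    # Propagate the 2-D position by one step along the platform heading (rad).
    return pos + step_length * np.array([np.cos(heading), np.sin(heading)])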

There are DR algorithms based on other types of sensors, such as visual odometry (Scaramuzza and Fraundorfer 2011 ) and wheel odometry (Brunker et al. 2018 ). Magnetometers (Gebre-Egziabher et al. 2006 ) are also used for heading determination.

To achieve a robust long-term DR solution, there are several challenges, including the existence of sensor errors (Li et al. 2015 ), the existence of the misalignment angle between device and platform (Pei et al. 2018 ), and the requirement for position and heading initialization. Also, the continuity of data is very important for DR. In some applications, it is necessary to interpolate, smooth, or reconstruct the data (Kim et al. 2016 ).

DR has become a core technique for continuous and seamless indoor/outdoor PLAN due to its self-contained characteristics and robust short-term solutions. It is strong in either complementing other PLAN techniques when they are available or bridging their signal outages and performance-degradation periods.

Database-matching techniques

The principle for database matching is to compute the difference between the measured fingerprints and the reference fingerprints in the database and find the closest match (Li et al. 2020a ). Database-matching techniques are used to process data from various sensors, such as cameras, LiDAR, wireless sensors, and magnetometers. The database-matching process consists of the steps of feature extraction, database learning, and prediction. Figure  7 demonstrates the processes. First, valuable features are extracted from raw sensor signals. Afterward, features at multiple reference points are combined to generate a database. Finally, the real-time measured features are compared with those in the database to localize the device.

figure 7

Diagram of database matching process

According to the dimensions of measurements and the database, database-matching algorithms can be divided into the 1D (measurement)-to-2D (database) matching, the 2D-to-2D matching, the 2D-to-3D matching, and the 3D-to-3D matching. In the 1D-to-2D matching, the real-time feature measurement can be expressed as a vector, while the database is a matrix. Such a matching approach has been used to match features such as wireless RSS (Li et al. 2017 ) and magnetic intensity (Li et al. 2018 ). Examples of the 2D-to-2D matching are the matching of real-time image features (e.g., road markers) and an image feature database (e.g., a road marker map) (Gruyer et al. 2016 ), and the matching of 2D LiDAR points and a grid map (de Paula Veronese et al. 2016 ). By contrast, the 2D-to-3D matching is a current hot spot. For example, it matches images to a 3D point cloud map (Wolcott and Eustice 2014 ). Finally, an example of the 3D-to-3D matching is the matching of 3D LiDAR measurements and a 3D point cloud map (Wolcott and Eustice 2017 ).

According to the prediction algorithm, database-matching algorithms can be divided into the deterministic (e.g., nearest neighbors (Lim et al. 2006 ) and Iterative Closest Point (ICP) (Chetverikov et al. 2002 )) and stochastic (e.g., Gaussian distribution (Haeberlen et al. 2004 ), Normal Distribution Transform (NDT) (Biber and Straßer 2003 ), histogram (Rusu et al. 2008 ), and machine-learning-based) ones. Machine learning methods, such as Artificial Neural Network (ANN) (Li et al. 2019b ), random forests (Guo et al. 2018 ), Deep Reinforcement Learning (DRL) (Li et al. 2019c ), and Gaussian Process (GP) (Hähnel and Fox 2006 ), have also been applied.
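As an example of the deterministic prediction step for the 1D-to-2D (RSS fingerprint) case, a weighted k-nearest-neighbors matcher can be sketched as follows; the array shapes and inverse-distance weighting are assumptions made for illustration.

import numpy as np

def knn_fingerprint_position(rss_query, rss_database, reference_positions, k=3):
    # rss_query:           (n_ap,) RSS vector measured online
    # rss_database:        (n_ref, n_ap) RSS matrix collected at reference points
    # reference_positions: (n_ref, 2) coordinates of the reference points
    signal_dist = np.linalg.norm(rss_database - rss_query, axis=1)  # signal-space distance
    nearest = np.argsort(signal_dist)[:k]
    weights = 1.0 / (signal_dist[nearest] + 1e-6)                   # inverse-distance weights
    return (weights[:, None] * reference_positions[nearest]).sum(axis=0) / weights.sum()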

With the rapid development of machine-learning techniques and the diversity of modern PLAN applications, database matching has attracted even more attention than geometrical methods. Database-matching methods are suitable for scenarios that are difficult to model or parameterize. On the other hand, the inconsistency between the real-time measurement and the database is the main error source in database matching; such inconsistency may be caused by new or changing environments and other factors. The survey paper (Li et al. 2020a) gives a detailed description of the error sources for database matching.

Multi-sensor fusion

The diversity and redundancy of sensors are essential to ensure a high level of robustness and safety of the PLAN system, because various sensors have different functionalities. In addition to its primary functionality, each sensor has at least one secondary functionality that assists the PLAN of other sensors. Table 10 shows the primary and secondary functionalities of different sensors in terms of PLAN.

Due to their various functionalities, different sensors provide different human-like senses. Table 11 lists PLAN sensors corresponding to different senses of the human body. The same type of human-like sensors can provide a backup or augmentation to one another. Meanwhile, the different types of human-like sensors are complementary. Thus, by fusing data from a diversity of sensors, extra robustness and safety can be achieved.

To be specific, for position-fixing and database-matching methods, the loss of signals or features leads to outages in the PLAN solution, and changes in the model and database parameters may degrade the PLAN performance. To mitigate these issues, DR techniques can be used (El-Sheimy and Niu 2007a, b). Moreover, other techniques can enhance position-fixing through more advanced base-station position estimation (Cheng et al. 2005), propagation-model estimation (Seco and Jiménez 2017), and device-diversity calibration (He et al. 2018); the number of base stations required can also be reduced (Li et al. 2020b). On the other hand, position-fixing and database-matching techniques can provide initialization and periodic updates for DR (Shin 2005), which in turn calibrate the sensors and suppress the drift of DR results.

Database matching can also be enhanced by other techniques. For example, the position-fixing method can be used to reduce the searching space of database-matching (Zhang et al. 2017b ), predict the database in unvisited areas (Li et al. 2019d ), and predict the uncertainty of database-matching results (Li et al. 2019e ). Also, a more robust PLAN solution may be achieved by integrating position-fixing and database-matching techniques (Kodippili and Dias 2010 ).

From the perspective of integration mode, there are three levels of integration. The first level is loose coupling (Shin 2005), which fuses the PLAN solutions from different sensors. The second level is tight coupling (Gao et al. 2020), which fuses various sensor measurements to obtain a PLAN solution. The third level is ultra-tight coupling, which uses the data or results from some sensors to enhance the performance of other sensors.
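A loosely coupled fusion can be illustrated by a single Kalman measurement update that corrects a DR state with an absolute position fix (e.g., from WiFi or UWB); the state layout [x, y, vx, vy] and the observation model below are assumptions made for this sketch.

import numpy as np

def loosely_coupled_position_update(x, P, z_pos, R_pos):
    # x, P:   predicted state [x, y, vx, vy] and covariance from the DR prediction
    # z_pos:  2-D position solution from the aiding positioning system
    # R_pos:  2x2 covariance of that position solution
    H = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])          # only the position is observed
    S = H @ P @ H.T + R_pos                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z_pos - H @ x)                   # corrected state
    P = (np.eye(len(x)) - K @ H) @ P              # corrected covariance
    return x, P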

Motion constraints

Motion constraints enhance PLAN solutions at the algorithm level, without adding extra sensors. Such constraints are especially useful for low-cost PLAN systems that cannot afford extra hardware. For land vehicles, the Non-Holonomic Constraints (NHC) can significantly improve the heading and position accuracy when the vehicle moves with enough speed (Niu et al. 2010), while the Zero velocity UPdaTe (ZUPT) and Zero Angular Rate Update (ZARU, also known as Zero Integrated Heading Rate (ZIHR)) respectively provide zero-velocity and zero-angular-rate constraints when the vehicle is quasi-static (Shin 2005). When the vehicle moves at low speed, a steering constraint can be applied (Niu et al. 2010). Moreover, there are other constraints, such as the height constraint (Godha and Cannon 2007) and the four-wheel constraint (Brunker et al. 2018).

For pedestrian navigation, ZUPT (Foxlin 2005 ) and ZARU (Li et al. 2015 ) are most commonly used. Also, the NHC and step velocity constraint (Zhuang et al. 2015 ) have been applied. Furthermore, in indoor environments, constraints such as the corridor-direction constraint (Abdulrahim et al. 2010 ), the height constraint (Abdulrahim et al. 2012 ), and the human-activity constraint (Zhou et al. 2015 ) are useful to enhance the PLAN solution.
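A simple quasi-static detector used to trigger ZUPT/ZARU updates can be sketched as follows; the thresholds are illustrative and would in practice be tuned to the sensor grade and platform. When the detector fires, a zero-velocity (and zero-angular-rate) pseudo-measurement is fed to the navigation filter.

import numpy as np

def is_quasi_static(gyro_window, acc_window, gravity=9.81,
                    gyro_threshold=0.05, acc_threshold=0.3):
    # gyro_window: (n, 3) angular rates (rad/s) over a short sliding window
    # acc_window:  (n, 3) specific forces (m/s^2) over the same window
    low_rotation = np.linalg.norm(gyro_window, axis=1).max() < gyro_threshold
    near_gravity = np.abs(np.linalg.norm(acc_window, axis=1) - gravity).max() < acc_threshold
    return low_rotation and near_gravity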

Multi-sensor-based indoor navigation has been utilized in various applications, involving pedestrians, vehicles, robots, animals, and sports. This section introduces three of our previous indoor navigation use cases, on smartphone, drone, and robot platforms.

Smartphones

This case uses an enhanced information-fusion structure to improve smartphone navigation (Li et al. 2017 ). The experiment uses the built-in inertial sensors, WiFi, and magnetometers of smartphones. By combining the advantages of PDR, WiFi database matching, and magnetic matching, a multi-level quality-control mechanism is introduced. Some quality controls are presented based on the interaction of sensors. For example, wireless positioning results are used to limit the search scope for magnetic matching, to reduce both computational load and mismatch rate.
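One of these quality controls, using the wireless position to bound the magnetic-matching search, can be sketched as follows; the gating radius and the candidate representation are assumptions for illustration, not the exact mechanism of (Li et al. 2017).

import numpy as np

def gate_magnetic_candidates(candidate_positions, wifi_position, gate_radius_m=10.0):
    # candidate_positions: (n, 2) candidate fingerprint locations for magnetic matching
    # wifi_position:       (2,) position estimate from wireless positioning
    # Only candidates within the gating radius are kept, which reduces both
    # the computational load and the mismatch rate of magnetic matching.
    candidates = np.asarray(candidate_positions, dtype=float)
    distances = np.linalg.norm(candidates - np.asarray(wifi_position, dtype=float), axis=1)
    return candidates[distances <= gate_radius_m]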

The user carried a mobile phone and navigated a modern office building (120 m by 60 m) for nearly an hour. The smartphone experienced multiple motion modes, including being held horizontally, dangling in the hand, being used for a call, and being carried in a trouser pocket.

The position results are demonstrated in Fig.  8 . When directly fusing the data from PDR, WiFi, and magnetic in a Kalman filter, the results suffer from large position errors. The ratio of large position errors (greater than 15 m) reached 33.4%. Such a solution is not reliable enough for user navigation. By using the improved multi-source fusion, the ratio of large errors was reduced to 0.8%. This use case indicates the importance of sensor interaction and robust multi-sensor fusion.

figure 8

Inertial/ WiFi/ magnetic integrated smartphone navigation results (modified on the results reported in Li et al. ( 2017 ))

Drones

This use case integrated a low-cost IMU, a barometer, a mass-flow sensor, and ultrasonic sensors for indoor drone navigation (Li et al. 2019a). The forward velocity from the mass-flow sensor and the lateral and vertical NHC can be utilized for 3D velocity updates.

Figure  9 shows the test scenario and selected results. Indoor flight tests were conducted in a 20 m by 20 m area with a quadrotor drone, which was equipped with an InvenSense MPU6000 IMU, a Honeywell HMC 5983 magnetometer triad, a TE MS5611 barometer, a Sensirion SFM3000 mass-flow sensor, and a Marvelmind ultrasonic beacon. Additionally, four ultrasonic beacons were installed on four static leveling pillars, with a height of 4 m.

figure 9

INS/Barometer/Mass-flow/Ultrasonic integrated navigation (modified on the results reported in Li et al. ( 2019a ))

When ultrasonic ranges were used, the system achieved a continuous and smooth navigation solution, with an approximate navigation accuracy at the centimeter to decimeter level. However, during ultrasonic signal outages, the mean accuracy degraded to 0.2, 0.6, 1.0, 1.3, 1.8, and 4.3 m when navigating for 5, 10, 15, 20, 30, and 60 s, respectively.

Robots

This use case integrated a photodiode and a camera for indoor robot navigation (Zhuang et al. 2019). Figure 10 shows the test platform and selected results. The test area was 5 m by 5 m by 2.84 m, with five CREE T6 Light-Emitting Diodes (LEDs) mounted evenly on the ceiling as light beacons. The receiver used in the experiments contained an OPT101 photodiode and the front camera of a smartphone, and was mounted on a mobile robot at a height of 1.25 m.

figure 10

Photodiode/Camera integrated navigation (modified on the results reported in Zhuang et al. ( 2019 ))

Field test results showed that the proposed system provided a semi-real-time positioning solution with an average 3D positioning accuracy of 15.6 cm in dynamic tests. The accuracy is expected to be further improved when more sensors are used.

Future trends

This section summarizes the future trends for indoor PLAN, including the improvement of sensors, the use of multi-platform, multi-device, and multi-sensor information fusion, the development of self-learning algorithms and systems, the integration with 5G/ IoT/ edge computing, and the use of HD maps for indoor PLAN.

Improvement of sensors

Table 12 illustrates the future trends of sensors in terms of PLAN. Sensors such as LiDAR, RADAR, inertial sensors, GNSS, and UWB are being developed toward lower cost and smaller size to facilitate their commercialization. For HD maps, reducing maintenance costs and increasing the update frequency are key. The camera may further improve its physical performance, such as self-cleaning, a wider dynamic range, stronger low-light sensitivity, and stronger near-infrared sensitivity.

It is expected that the introduction of new wireless infrastructure features (e.g., 5G, LPWAN, WiFi HALow, WiFi RTT, Bluetooth long range, and Bluetooth direction finding) and new sensors (e.g., UWB, LiDAR, depth camera, and high-precision GNSS) in consumer devices will bring in new directions and opportunities for the PLAN society.

Multi-platform, multi-device, and multi-sensor information fusion

The PLAN system will develop towards the integration of multiple platforms, multiple devices, and multiple sensors. Figure  11 shows a schematic diagram of the multiple-platform integrated PLAN.

figure 11

Schematic diagram of multiple-platform integrated PLAN

With the development of low-cost miniaturized satellites and Low Earth Orbit (LEO) satellite technologies, using LEO satellites to provide space-based navigation signal has become feasible. The research paper (Cluzel et al. 2018 ) uses LEO satellites to enhance the coverage of IoT signals. Also, the paper (Wang et al. 2018 ) analyzes the navigation signals from LEO satellites. In addition to the space-borne platform, there are airborne and underground PLAN platforms. For example, the research paper (Sallouha et al. 2018 ) uses unmanned aerial vehicles as base stations to enhance PLAN.

Collaborative PLAN is also a future direction. The research in (Zhang et al. 2017a) reviewed 5G cooperative localization techniques and pointed out that cooperative localization can be an important feature of 5G networks. In the coming years, the characteristics of massive devices, dense base stations, and device-to-device communication may make accurate cooperative localization possible. In addition to multiple platforms, there may be multiple devices (e.g., smartphones, smartwatches, and IoT devices) on the same human body or vehicle, and the information from such devices can also be used to enhance PLAN.

Self-learning algorithms and systems

Artificial intelligence.

With the popularization of IoT and location-based services, more complex and novel PLAN scenarios will appear, and self-learning PLAN algorithms and systems will be needed. There are already research works that use artificial intelligence techniques in various PLAN modules, such as initialization, switching the sensor-integration mode, and tuning parameters. The research paper (Chen et al. 2020) uses an artificial neural network (ANN) to generate PLAN solutions directly from inertial sensor data, while the research work (Li et al. 2019c) uses deep reinforcement learning (DRL) to perform wireless positioning from another perspective. In the future, there will be a massive amount of data, which meets the data requirements of artificial intelligence. Meanwhile, with the further development of artificial intelligence algorithms, computing power, and communication capabilities, the integration between PLAN and artificial intelligence will become tighter.
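As a rough illustration of the learning-based direction mentioned above (this is not the network of Chen et al. (2020); the window length, architecture, and random training data are all assumptions), the sketch below defines a small regression model that maps a fixed window of six-axis IMU samples to a 2D displacement, which is the core idea behind learned inertial odometry.

```python
import torch
import torch.nn as nn

class InertialOdometryNet(nn.Module):
    """Toy network: a 200-sample window of 6-axis IMU data -> 2D displacement."""
    def __init__(self, channels: int = 6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, 2)   # (dx, dy) over the window

    def forward(self, imu: torch.Tensor) -> torch.Tensor:
        # imu: (batch, 6, window) of accelerometer + gyroscope samples
        return self.head(self.encoder(imu).squeeze(-1))

# One illustrative training step on random tensors; real data would come from
# synchronized IMU logs paired with ground-truth trajectory segments.
model = InertialOdometryNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
imu_batch = torch.randn(16, 6, 200)
target_disp = torch.randn(16, 2)
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(imu_batch), target_disp)
loss.backward()
optimizer.step()
print(f"toy training loss: {loss.item():.3f}")
```

In practice such a model would be trained on large motion datasets and its per-window displacement predictions integrated into a dead-reckoned track, with classical filtering still handling sensor biases and absolute position fixes.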

Data crowdsourcing (e.g., co-location)

The data from numerous consumer electronics and sensor networks will make crowdsourcing (e.g., co-location) a reality. As mentioned in the HD map subsection, the crowdsourcing technique may fundamentally change how maps and HD maps are generated. Furthermore, crowdsourced data can enhance PLAN performance; for example, it contains more comprehensive information than an ego-only car in terms of map availability and sensing range. On the other hand, as pointed out in (Li et al. 2019e), how to select the most valuable data from crowdsourced big data to update the database is still a challenge, and it is difficult for software to evaluate data reliability automatically without manual intervention or an evaluation reference.

Integration with 5G, IoT, and edge/fog computing

As described in the 5G subsection, the development of 5G and IoT technologies is changing PLAN. The new features (e.g., dense miniaturized base stations, mm-wave MIMO, and device-to-device communication) can directly enhance PLAN. Also, the combination of 5G/IoT and edge/fog computing will bring new PLAN opportunities. Edge/fog computing allows data to be processed as close to the source as possible, which enables faster PLAN data processing, reduces latency, and yields better overall outcomes. The review papers (Oteafy and Hassanein 2018) and (Shi et al. 2016) provide detailed overviews of edge computing and fog computing, respectively. Such techniques may change the existing operation mode of HD maps and PLAN; for example, it may become possible to repair or optimize HD maps online by using SLAM and artificial intelligence technologies.

HD maps for indoor navigation

HD maps will be extended from outdoors to indoors. The cooperation among the manufacturers of cars, maps, 5G, and consumer devices has already shown its importance (Abuelsamid 2017). The high accuracy and rich information of the HD map make it a valuable indoor PLAN sensor and even a platform that links people, vehicles, and the environment. Indoor and outdoor PLAN may need different HD map elements; therefore, different HD maps may be developed for different scenarios. As in the outdoor case, the standardization of indoor HD maps will be important but challenging.

Conclusions

This article first reviews the market value of indoor navigation, including its social benefits and economic values, followed by a classification from the market perspective and the main players. It then compares the state-of-the-art sensors, including navigation sensors and environmental-perception sensors (as aiding sensors for navigation), and techniques, including position-fixing, dead-reckoning, database matching, multi-sensor fusion, and motion constraints. Finally, it points out several future trends, including the improvement of sensors; the use of multi-platform, multi-device, and multi-sensor information fusion; the development of self-learning algorithms and systems; the integration with 5G/IoT/edge computing; and the use of HD maps for indoor PLAN.

Availability of data and materials

Data sharing is not applicable to this article as no datasets were generated or analyzed in this review article.

Abdulrahim, K., Hide, C., Moore, T., & Hill, C. (2010). Aiding MEMS IMU with building heading for indoor pedestrian navigation. In 2010 ubiquitous positioning indoor navigation and location based service. Helsinki: IEEE.

Abdulrahim, K., Hide, C., Moore, T., & Hill, C. (2012). Using constraints for shoe mounted indoor pedestrian navigation. Journal of Navigation, 65 (1), 15–28.


Abuelsamid, S. (2017). BMW, HERE and mobileye team up to crowd-source HD maps for self-driving. https://www.forbes.com/sites/samabuelsamid/2017/02/21/bmw-here-and-mobileye-team-up-to-crowd-source-hd-maps-for-self-driving/#6f04e0577cb3 . Accessed April 28, 2020.

Agency, E. G. (2019). Report on road user needs and requirements. https://www.gsc-europa.eu/sites/default/files/sites/all/files/Report_on_User_Needs_and_Requirements_Road.pdf . Accessed April 28, 2020.

Alvarez, D., González, R. C., López, A., & Alvarez, J. C. (2006). Comparison of step length estimators from wearable accelerometer devices. Annual international conference of the IEEE engineering in medicine and biology (pp. 5964–5967). IEEE: New York.


Andrews, J. G., Buzzi, S., Choi, W., Hanly, S. V., Lozano, A., Soong, A. C. K., & Zhang, J. C. (2014). What will 5G be? IEEE Journal on Selected Areas in Communications, 32 (6), 1065–1082.

Badawy, A., Khattab, T., Trinchero, D., Fouly, T. E., & Mohamed, A. (2014). A simple AoA estimation scheme. arXiv:1409.5744.

Bai, L., Peng, C. Y., & Biswas, S. (2008). Association of DOA estimation from two ULAs. IEEE Transactions on Instrumentation and Measurement, 57 (6), 1094–1101. https://doi.org/10.1109/TIM.2007.915122 .

Basnayake, C., Williams, T., Alves, P., & Lachapelle, G. J. G. W. (2010). Can GNSS Drive V2X? GPS World, 21 (10), 35–43.

Biber, P., & Straßer, W. (2003). The normal distributions transform: A new approach to laser scan matching. Proceedings 2003 IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 2743–2748). IEEE: Las Vegas, NV.

Bluetooth. (2017). Exploring Bluetooth 5—going the distance. https://www.bluetooth.com/blog/exploring-bluetooth-5-going-the-distance/ . Accessed April 28, 2020.

Bluetooth. (2019). Bluetooth 5.1 Direction finding. https://www.bluetooth.com/wp-content/uploads/2019/05/BTAsia/1145-NORDIC-Bluetooth-Asia-2019Bluetooth-5.1-Direction-Finding-Theory-and-Practice-v0.pdf . Accessed April 28, 2020.

Brossard, M., Barrau, A., & Bonnabel, S. (2020). AI-IMU dead-reckoning. IEEE Transactions on Intelligent Vehicles, 5 (4), 585–595. https://doi.org/10.1109/TIV.2020.2980758 .

Brunker, A., Wohlgemuth, T., Frey, M., & Gauterin, F. (2018). Odometry 2.0: A slip-adaptive EIF-based four-wheel-odometry model for parking. IEEE Transactions on Intelligent Vehicles, 4 (1), 114–126.

Bshara, M., Orguner, U., Gustafsson, F., & Van Biesen, L. (2011). Robust tracking in cellular networks using HMM filters and cell-ID measurements. IEEE Transactions on Vehicular Technology, 60 (3), 1016–1024.

Cadena, C., Carlone, L., Carrillo, H., Latif, Y., Scaramuzza, D., Neira, J., et al. (2016). Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. IEEE Transactions on Robotics, 32 (6), 1309–1332.

Chen, C., Zhao, P., Lu, C. X., Wang, W., Markham, A., & Trigoni, A. (2020). Deep-learning-based pedestrian inertial navigation: Methods, data set, and on-device inference. IEEE Internet of Things Journal, 7 (5), 4431–4441.

Cheng, Y. C., Chawathe, Y., Lamarca, A., & Krumm, J. (2005). Accuracy characterization for metropolitan-scale Wi-Fi localization. In Proceedings of the 3rd international conference on mobile systems , applications , and services , MobiSys 2005 (pp. 233–245). Seattle, WA: IEEE.

Chetverikov, D., Svirko, D., Stepanov, D., & Krsek, P. (2002). The trimmed iterative closest point algorithm. Object recognition supported by user interaction for service robots (pp. 545–548). IEEE: Quebec City, QC.


Ciurana, M., Barcelo-Arroyo, F., & Izquierdo, F. (2007). A ranging system with IEEE 802.11 data frames. In 2007 IEEE radio and wireless symposium (pp. 133–136). Long Beach, CA: IEEE.

Cluzel, S., Franck, L., Radzik, J., Cazalens, S., Dervin, M., Baudoin, C., & Dragomirescu, D. (2018). 3GPP NB-IOT coverage extension using LEO satellites. IEEE Vehicular Technology Conference (pp. 1–5). IEEE: Porto.

de Paula Veronese, L., Guivant, J., Cheein, F. A. A., Oliveira-Santos, T., Mutz, F., de Aguiar, E., et al. (2016). A light-weight yet accurate localization system for autonomous cars in large-scale and complex environments. 2016 IEEE 19th international conference on intelligent transportation systems (ITSC) (pp. 520–525). IEEE: Rio de Janeiro.

Decawave. (2020). DWM1000 Module. https://www.decawave.com/product/dwm1000-module/ . Accessed April 28, 2020.

del Peral-Rosado, J. A., Raulefs, R., López-Salcedo, J. A., & Seco-Granados, G. (2017). Survey of cellular mobile radio localization methods: From 1G to 5G. IEEE Communications Surveys and Tutorials, 20 (2), 1124–1148.

Dodge, D. (2013). Indoor Location startups innovating Indoor Positioning. https://dondodge.typepad.com/the_next_big_thing/2013/06/indoor-location-startups-innovating-indoor-positioning.html . Accessed April 28, 2020.

El-Sheimy, N., & Niu, X. (2007a). The promise of MEMS to the navigation community. Inside GNSS, 2 (2), 46–56.

El-Sheimy, N., & Niu, X. (2007b). The promise of MEMS to the navigation community. Inside GNSS, 2 (2), 26–56.

El-Sheimy, N., Hou, H., & Niu, X. (2007). Analysis and modeling of inertial sensors using Allan variance. IEEE Transactions on Instrumentation and Measurement, 57 (1), 140–149.

El-Sheimy, N., & Youssef, A. (2020). Inertial sensors technologies for navigation applications: State of the art and future trends. Satellite Navigation, 1 (1), 2.

FCC. (2015). FCC 15–9. https://ecfsapi.fcc.gov/file/60001025925.pdf . Accessed 28 April 2020.

Foxlin, E. (2005). Pedestrian tracking with shoe-mounted inertial sensors. IEEE Computer Graphics and Applications, 25 (6), 38–46.

Gao, Z., Ge, M., Li, Y., Pan, Y., Chen, Q., & Zhang, H. (2020). Modeling of multi-sensor tightly aided BDS triple-frequency precise point positioning and initial assessments. Information Fusion, 55, 184–198.

Gebre-Egziabher, D., Elkaim, G. H., David Powell, J., & Parkinson, B. W. (2006). Calibration of strapdown magnetometers in magnetic field domain. Journal of Aerospace Engineering, 19 (2), 87–102.

Glennie, C., & Lichti, D. D. J. R. S. (2010). Static calibration and analysis of the Velodyne HDL-64E S2 for high accuracy mobile scanning. Remote Sensing, 2 (6), 1610–1624.

Godha, S., & Cannon, M. E. (2007). GPS/MEMS INS integrated system for navigation in urban areas. GPS Solutions, 11 (3), 193–203.

Goldstein. (2019). Global Indoor Positioning and Indoor Navigation (IPIN) Market Outlook, 2024. https://www.goldsteinresearch.com/report/global-indoor-positioning-and-indoor-navigation-ipin-market-outlook-2024-global-opportunity-and-demand-analysis-market-forecast-2016-2024 . Accessed April 28, 2020.

Gruyer, D., Belaroussi, R., & Revilloud, M. (2016). Accurate lateral positioning from map data and road marking detection. Expert Systems with Applications, 43, 1–8.

Guo, X., Ansari, N., Li, L., & Li, H. (2018). Indoor localization by fusing a group of fingerprints based on random forests. IEEE Internet of Things Journal, 5 (6), 4686–4698.

Guvenc, I., & Chong, C. C. (2009). A survey on TOA based wireless localization and NLOS mitigation techniques. IEEE Communications Surveys and Tutorials, 11 (3), 107–124.

Haeberlen, A., Flannery, E., Ladd, A. M., Rudys, A., Wallach, D. S., & Kavraki, L.E. (2004). Practical robust localization over large-scale 802.11 wireless networks. In Proceedings of the 10th annual international conference on Mobile computing and networking (pp. 70–84). Philadelphia, PA: IEEE.

Hähnel, B. F. D., & Fox, D. (2006). Gaussian processes for signal strength-based location estimation. In Proceeding of robotics: Science and systems. Philadelphia, PA: IEEE.

Halperin, D., Hu, W., Sheth, A., & Wetherall, D. (2011). Tool release: Gathering 802.11 n traces with channel state information. ACM SIGCOMM Computer Communication Review, 41 (1), 53–53.

He, S., Chan, S. H. G., Yu, L., & Liu, N. (2018). SLAC: Calibration-free pedometer-fingerprint fusion for indoor localization. IEEE Transactions on Mobile Computing, 17 (5), 1176–1189.

Ibisch, A., Stümper, S., Altinger, H., Neuhausen, M., Tschentscher, M., Schlipsing, M., Salinen, J., & Knoll, A. (2013). Towards autonomous driving in a parking garage: Vehicle localization and tracking using environment-embedded lidar sensors. In 2013 IEEE intelligent vehicles symposium (IV) (pp. 829–834). Gold Coast: IEEE.

Ibisch, A., Houben, S., Michael, M., Kesten, R., & Schuller, F. (2015). Arbitrary object localization and tracking via multiple-camera surveillance system embedded in a parking garage. In Video surveillance and transportation imaging applications 2015 (pp. 94070G). San Francisco, CA: International Society for Optics and Photonics.

IEEE. (2020). IEEE 802.11TM Wireless Local Area Network. http://www.ieee802.org/11/ . Accessed 28 April 2020.

Kaune, R., Hörst, J., & Koch, W. (2011). Accuracy analysis for TDOA localization in sensor networks. 14th international conference on information fusion (pp. 1–8). IEEE: Chicago, Illinois, USA.

Kim, K. J., Agrawal, V., Gaunaurd, I., Gailey, R. S., & Bennett, C. L. (2016). Missing sample recovery for wireless inertial sensor-based human movement acquisition. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 24 (11), 1191–1198. https://doi.org/10.1109/TNSRE.2016.2532121 .

Kodippili, N. S., & Dias, D. (2010). Integration of fingerprinting and trilateration techniques for improved indoor localization. In 2010 7th international conference on wireless and optical communications networks . Colombo: IEEE.

Kok, M., & Solin, A. (2018). Scalable magnetic field SLAM in 3D using Gaussian process maps. 2018 21st international conference on information fusion (FUSION) (pp. 1353–1360). IEEE: Cambridge.

Langley, R. B. (1999). Dilution of precision. GPS World, 1 (1), 1–5.

Leugner, S., Pelka, M., & Hellbrück, H. (2016). Comparison of wired and wireless synchronization with clock drift compensation suited for U-TDoA localization. 2016 13th workshop on positioning, navigation and communications (WPNC) (pp. 1–4). IEEE: Bremen.

Levinson, J., Montemerlo, M., & Thrun, S. (2007). Map-based precision vehicle localization in urban environments. In Robotics: Science and systems (pp. 1). Atlanta, GA: IEEE.

Levinson, J., & Thrun, S. (2010). Robust vehicle localization in urban environments using probabilistic maps. 2010 IEEE international conference on robotics and automation (pp. 4372–4378). IEEE: Anchorage, AK.

Li, X. (2006). RSS-based location estimation with unknown pathloss model. IEEE Transactions on Wireless Communications, 5 (12), 3626–3633. https://doi.org/10.1109/TWC.2006.256985 .

Li, Y., Georgy, J., Niu, X., Li, Q., & El-Sheimy, N. (2015). Autonomous calibration of MEMS gyros in consumer portable devices. IEEE Sensors Journal, 15 (7), 4062–4072.

Li, Y., Zhuang, Y., Zhang, P., Lan, H., Niu, X., & El-Sheimy, N. (2017). An improved inertial/wifi/magnetic fusion structure for indoor navigation. Information Fusion, 34, 101–119.

Li, Y., Gao, Z., He, Z., Zhang, P., Chen, R., & El-Sheimy, N. (2018). Multi-sensor multi-floor 3D localization with robust floor detection. IEEE Access, 6, 76689–76699.

Li, Y., Zahran, S., Zhuang, Y., Gao, Z. Z., Luo, Y. R., He, Z., et al. (2019a). IMU/magnetometer/barometer/mass-flow sensor integrated indoor quadrotor UAV localization with robust velocity updates. Remote Sensing, 11 (7), 838. https://doi.org/10.3390/rs11070838 .

Li, Y., Gao, Z. Z., He, Z., Zhuang, Y., Radi, A., Chen, R. Z., & El-Sheimy, N. (2019b). Wireless fingerprinting uncertainty prediction based on machine learning. Sensors, 19 (2), 324.

Li, Y., Hu, X., Zhuang, Y., Gao, Z., Zhang, P., & El-Sheimy, N. (2019c). Deep Reinforcement Learning (DRL): another perspective for unsupervised wireless localization. IEEE Internet of Things Journal .

Li, Y., He, Z., Zhuang, Y., Gao, Z. Z., Tsai, G. J., & Pei, L. (2019d). Robust localization through integration of crowdsourcing and machine learning. In Presented at the International conference on mobile mapping technology . Shenzhen, China.

Li, Y., He, Z., Gao, Z., Zhuang, Y., Shi, C., & El-Sheimy, N. (2019e). Toward robust crowdsourcing-based localization: A fingerprinting accuracy indicator enhanced wireless/magnetic/inertial integration approach. IEEE Internet of Things Journal, 6(2), 3585–3600.

Li, Y., Zhuang, Y., Hu, X., Gao, Z. Z., Hu, J., Chen, L., He, Z., Pei, L., Chen, K. J., Wang, M. S., Niu, X. J., Chen, R. Z., Thompson, J., Ghannouchi, F., & El-Sheimy, N . (2020a). Location-Enabled IoT (LE-IoT): A survey of positioning techniques, error sources, and mitigation. IEEE Internet of Things Journal .

Li, Y., Yan, K. L., He, Z., Li, Y. Q., Gao, Z. Z., Pei, L., et al. (2020b). Cost-effective localization using RSS from single wireless access point. IEEE Transactions on Instrumentation and Measurement, 69(5), 1860–1870. https://doi.org/10.1109/TIM.2019.2922752.

Lim, H., Kung, L. C., Hou, J. C., & Luo, H. (2006). Zero-configuration, robust indoor localization: Theory and experimentation. In Proceedings IEEE INFOCOM 2006. 25TH IEEE international conference on computer communications . Barcelona: IEEE.

Lin, Y., Gao, F., Qin, T., Gao, W. L., Liu, T. B., Wu, W., et al. (2018). Autonomous aerial navigation using monocular visual-inertial fusion. Journal of Field Robotics, 35 (1), 23–51.

Liu, R., Wang, J., & Zhang, B. (2020). High definition map for automated driving: Overview and analysis. The Journal of Navigation, 73 (2), 324–341.

MachineDesign. (2020). 5G’s Important Role in Autonomous Car Technology. https://www.machinedesign.com/mechanical-motion-systems/article/21837614/5gs-important-role-in-autonomous-car-technology . Accessed April 28, 2020.

Marvelmind. (2020). Indoor Navigation System Operating manual. https://marvelmind.com/pics/marvelmind_navigation_system_manual.pdf . Accessed April 28, 2020.

Maybeck, P. S. (1982). Stochastic models, estimation, and control . London: Academic Press.


McManus, C., Churchill, W., Napier, A., Davis, B., & Newman, P. (2013). Distraction suppression for vision-based pose estimation at city scales. 2013 IEEE international conference on robotics and automation (pp. 3762–3769). IEEE: Karlsruhe.

Mur-Artal, R., & Tardós, J. D. (2017). Orb-slam2: An open-source slam system for monocular, stereo, and RGB-d cameras. IEEE Transactions on Robotics, 33 (5), 1255–1262.

NHTSA. (2017). Federal motor vehicle safety standards; V2V communications. Federal Register, 82 (8), 3854–4019.

Niu, X., Zhang, H., Chiang, K. W., & El-Sheimy, N. (2010). Using land-vehicle steering constraint to improve the heading estimation of mems GPS/ins georeferencing systems. ISPRS - International Archives of the Photogrammetry, Remote Sensing Spatial Information Sciences, 38 (1), 1–5.

Niu, X., Li, Y., Kuang, J., & Zhang, P. (2019). Data fusion of dual foot-mounted IMU for pedestrian navigation. IEEE Sensors Journal, 19 (12), 4577–4584.

NovAtel, H. (2020). IMU-FSAS. https://docs.novatel.com/OEM7/Content/Technical_Specs_IMU/FSAS_Overview.htm . Accessed 28 April 2020.

Nvidia. (2020). DRIVE Labs: How Localization Helps Vehicles Find Their Way. https://news.developer.nvidia.com/drive-labs-how-localization-helps-vehicles-find-their-way/ . Accessed April 28, 2020.

Oteafy, S. M. A., & Hassanein, H. S. (2018). IoT in the fog: A roadmap for data-centric IoT development. IEEE Communications Magazine, 56 (3), 157–163.

Pei, L., Liu, D., Zou, D., Leefookchoy, R., Chen, Y., & He, Z. (2018). Optimal heading estimation based multidimensional particle filter for pedestrian indoor positioning. IEEE Access, 6, 49705–49720. https://doi.org/10.1109/ACCESS.2018.2868792 .

Petovello, M. (2003). Real-time integration of a tactical-grade IMU and GPS for high-accuracy positioning and navigation . Calgary: University of Calgary.

Pivato, P., Palopoli, L., & Petri, D. (2011). Accuracy of RSS-based centroid localization algorithms in an indoor environment. IEEE Transactions on Instrumentation and Measurement, 60 (10), 3451–3460.

Poggenhans, F., Salscheider, N. O., & Stiller, C. (2018). Precise localization in high-definition road maps for urban regions. 2018 IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 2167–2174). IEEE: Madrid.

Quuppa. (2020). Product and Technology. http://quuppa.com/technology/ . Accessed 28 April 2020.

Radi, A., Bakalli, G., Guerrier, S., El-Sheimy, N., Sesay, A. B., & Molinari, R. (2019). A multisignal wavelet variance-based framework for inertial sensor stochastic error modeling. IEEE Transactions on Instrumentation and Measurement, 68 (12), 4924–4936.

Rantakokko, J., Händel, P., Fredholm, M., & Marsten-Eklöf, F. (2010). User requirements for localization and tracking technology: A survey of mission-specific needs and constraints. 2010 international conference on indoor positioning and indoor navigation (pp. 1–9). IEEE: Zurich.

Reid, T. G. R., Houts, S. E., Cammarata, R., Mills, G., Agarwal, S., Vora, A., & Pandey, G. (2019). Localization requirements for autonomous vehicles. arXiv:1906.01061.

Restrepo, J. (2020). World radio 5G roadmap: challenges and opportunities ahead. https://www.itu.int/en/ITU-R/seminars/rrs/RRS-17-Americas/Documents/Forum/1_ITU%20Joaquin%20Restrepo.pdf . Accessed April 28, 2020.

Rusu, R. B., Blodow, N., Marton, Z. C., & Beetz, M. (2008). Aligning point cloud views using persistent feature histograms. 2008 IEEE/RSJ international conference on intelligent robots and systems (pp. 3384–3391). IEEE: Nice.

SAE-International. (2016). Taxonomy and definitions for terms related to driving automation systems for on-road motor vehicles. https://www.sae.org/standards/content/j3016_201609/ . Accessed April 28, 2020.

Sallouha, H., Azari, M. M., Chiumento, A., & Pollin, S. (2018). Aerial anchors positioning for reliable RSS-based outdoor localization in urban environments. IEEE Wireless Communications Letters, 7 (3), 376–379.

Scaramuzza, D., & Fraundorfer, F. (2011). Visual odometry [tutorial]. IEEE Robotics and Automation Magazine, 18 (4), 80–92.

Schneider, O. (2010). Requirements for positioning and navigation in underground constructions. International conference on indoor positioning and indoor navigation (pp. 1–4). IEEE: Zurich.

Schönenberger. (2019). The automotive digital transformation and the economic impacts of existing data access model. https://www.fiaregion1.com/wp-content/uploads/2019/03/The-Automotive-Digital-Transformation_Full-study.pdf . Accessed April 28, 2020.

Seco, F., & Jiménez, A. R. (2017). Autocalibration of a wireless positioning network with a FastSLAM algorithm. 2017 international conference on indoor positioning and indoor navigation (pp. 1–8). IEEE: Sapporo.

Seif, H. G., & Hu, X. (2016). Autonomous driving in the iCity—HD maps as a key challenge of the automotive industry. Engineering, 2 (2), 159–162.

Shi, W., Cao, J., Zhang, Q., Li, Y., & Xu, L. (2016). Edge computing: Vision and challenges. IEEE Internet of Things Journal, 3 (5), 637–646.

Shin, E. H. (2005). Estimation techniques for low-cost inertial navigation . Calgary: University of Calgary.

Shin, S.H., Park, C.G., Kim, J.W., Hong, H.S., & Lee, J.M. (2007). Adaptive step length estimation algorithm using low-cost MEMS inertial sensors. In Proceedings of the 2007 IEEE sensors applications symposium . San Diego, CA: IEEE.

Singh, S. (2015). Critical reasons for crashes investigated in the national motor vehicle crash causation survey. https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812115 . Accessed April 28, 2020.

Stephenson, S. (2016). Automotive applications of high precision GNSS . Nottingham: University of Nottingham.

Synced. (2018). The Golden Age of HD Mapping for Autonomous Driving. https://medium.com/syncedreview/the-golden-age-of-hd-mapping-for-autonomous-driving-b2a2ec4c11d . Accessed April 28, 2020.

TDK-InvenSense. (2020). MPU-9250 Nine-Axis (Gyro + Accelerometer + Compass) MEMS MotionTracking™ Device. https://invensense.tdk.com/products/motion-tracking/9-axis/mpu-9250/ . Accessed April 28, 2020.

Tesla. (2020). Autopilot. https://www.tesla.com/autopilot . Accessed April 28, 2020.

Tiemann, J., Schweikowski, F., & Wietfeld, C. (2015). Design of an UWB indoor-positioning system for UAV navigation in GNSS-denied environments. 2015 international conference on indoor positioning and indoor navigation (IPIN) (pp. 1–7). IEEE: Calgary.

Titterton, D., Weston, J.L., & Weston, J. (2004). Strapdown inertial navigation technology . IET.

TomTom. (2020). Extending the vision of automated vehicles with HD Maps and ADASIS. http://download.tomtom.com/open/banners/Elektrobit_TomTom_whitepaper.pdf . Accessed April 28, 2020.

Trimble. (2020). Trimble RTX. https://positioningservices.trimble.com/services/rtx/?gclid=CjwKCAjwnIr1BRAWEiwA6GpwNY78s-u6pUzELeIu_elfoumO63LmR2QHf72Q9pM-L-NXyJjomWCX6BoCE5YQAvD_BwE . Accessed April 28, 2020.

Vasisht, D., Kumar, S., & Katabi, D. (2016). Decimeter-level localization with a single WiFi access point. 13th USENIX symposium on networked systems design and implementation (pp. 165–178). USENIX Association: Santa Clara.

Velodyne. (2020). HDL-64E High Definition Real-Time 3D Lidar. https://velodynelidar.com/products/hdl-64e/ . Accessed April 28, 2020.

Wang, L., Chen, R. Z., Li, D. R., Zhang, G., Shen, X., Yu, B. G., et al. (2018). Initial assessment of the LEO based navigation signal augmentation system from Luojia-1A satellite. Sensors (Switzerland), 18 (11), 3919.

Wang, Y., & Ho, K. J. I. T. O. W. C. (2015). An asymptotically efficient estimator in closed-form for 3-D AOA localization using a sensor network. IEEE Transactions on Wireless Communications, 14 (12), 6524–6535.

Wang, Y. T., Li, J., Zheng, R., & Zhao, D. (2017). ARABIS: An Asynchronous acoustic indoor positioning system for mobile devices. 2017 international conference on indoor positioning and indoor navigation (pp. 1–8). IEEE: Sapporo.

WiFi-Alliance. (2020). Wi-Fi HaLow low power, long range Wi-Fi. https://www.wi-fi.org/discover-wi-fi/wi-fi-halow . Accessed April 28, 2020.

Will, H., Hillebrandt, T., Yuan, Y., Yubin, Z., & Kyas, M. (2012). The membership degree min-max localization algorithm. 2012 ubiquitous positioning, indoor navigation, and location based service (UPINLBS) (pp. 1–10). IEEE: Helsinki.

Witrisal, K., Meissner, P., Leitinger, E., Shen, Y., Gustafson, C., Tufvesson, F., et al. (2016). High-accuracy localization for assisted living: 5G systems will turn multipath channels from foe to friend. IEEE Signal Processing Magazine, 33 (2), 59–70.

Wolcott, R. W., & Eustice, R. M. (2014). Visual localization within lidar maps for automated urban driving. 2014 IEEE/RSJ international conference on intelligent robots and systems (pp. 176–183). IEEE: Chicago, IL.

Wolcott, R. W., & Eustice, R. M. (2017). Robust LIDAR localization using multiresolution Gaussian mixture maps for autonomous driving. The International Journal of Robotics Research, 36 (3), 292–319.

Zhang, J., Han, G., Sun, N., & Shu, L. (2017). Path-loss-based fingerprint localization approach for location-based services in indoor environments. IEEE Access, 5, 13756–13769.

Zhang, P., Lu, J., Wang, Y., & Wang, Q. (2017). Cooperative localization in 5G networks: A survey. ICT Express, 3 (1), 27–32.

Zhou, B., Li, Q., Mao, Q., Tu, W., & Zhang, X. (2015). Activity sequence-based indoor pedestrian localization using smartphones. IEEE Transactions on Human-Machine Systems, 45 (5), 562–574.

Zhuang, Y., Lan, H., Li, Y., & El-Sheimy, N. (2015). PDR/INS/WiFi integration based on handheld devices for indoor pedestrian navigation. Micromachines, 6 (6), 793–812.

Zhuang, Y., Yang, J., Li, Y., Qi, L., & El-Sheimy, N. (2016). Smartphone-based indoor localization with bluetooth low energy beacons. Sensors, 16 (5), 596.

Zhuang, Y., Wang, Q., Li, Y., Gao, Z. Z., Zhou, B. P., Qi, L. N., et al. (2019). The integration of photodiode and camera for visible light positioning by using fixed-lag ensemble Kalman smoother. Remote Sensing, 11 (11), 1387.

Funding

This work was supported by Canada Research Chairs programs (Grant No. RT691875).

Author information

Authors and affiliations.

Department of Geomatics Engineering, University of Calgary, 2500 University Drive N.W, Calgary, AB, T2N 1N4, Canada

Naser El-Sheimy & You Li


Contributions

NE devised the article structure and general contents and wrote parts of the manuscript. YL assisted in summarizing and writing the manuscript. Both authors have read and approved the final manuscript.

Authors' information

Naser El-Sheimy is a Professor at the Department of Geomatics Engineering, the University of Calgary. He is a Fellow of the Canadian Academy of Engineering and the US Institute of Navigation and a Tier-I Canada Research Chair in Geomatics Multi-sensor Systems. His research expertise includes Geomatics multi-sensor systems, GPS/INS integration, and mobile mapping systems. He is also the founder and CEO of Profound Positioning Inc. He has published two books, 6 book chapters, and over 450 papers in academic journals and conference and workshop proceedings, for which he has received over 30 paper awards. He has supervised and graduated over 60 Masters and Ph.D. students. He is the recipient of many national and international awards, including the ASTech “Leadership in Alberta Technology” Award and the Association of Professional Engineers, Geologists, and Geophysicists of Alberta (APEGGA) Educational Excellence Award.

You Li is a Senior Researcher at the University of Calgary. He received Ph.D. degrees from both Wuhan University and the University of Calgary in 2016 and was selected for the national young talented project in 2020. His research focuses on ubiquitous internet-of-things localization. He has hosted or participated in four national research projects, co-published over 70 academic papers, and has over 20 patents pending. He serves as an Associate Editor of the IEEE Sensors Journal and as a committee member of the IAG unmanned navigation systems and ISPRS mobile mapping working groups. He has won four best paper awards and was a winner in the EvAAL international indoor localization competition.

Corresponding author

Correspondence to You Li .

Ethics declarations

Competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

El-Sheimy, N., Li, Y. Indoor navigation: state of the art and future trends. Satell Navig 2 , 7 (2021). https://doi.org/10.1186/s43020-021-00041-3


Received : 03 November 2020

Accepted : 09 February 2021

Published : 03 May 2021

DOI : https://doi.org/10.1186/s43020-021-00041-3

Keywords

  • Indoor positioning
  • Information fusion
  • Wireless localization
  • Dead reckoning
  • Database matching


OPINION article

Indoor positioning systems: a blessing for seamless object identification, monitoring, and tracking.

Shilpa Shyam

  • Karunya Institute of Technology and Sciences, Department of Computer Science and Engineering, Coimbatore, India

Introduction

Technology is the greatest result of human imagination. This paper lays out the continual proliferation of indoor positioning technology along with its advantages and challenges. The authors point out that there is far more to be grasped and utilized in this powerful and growing domain.

The Prominence of Indoor Positioning Systems in the Past, Present, and Years to Come

The global navigation satellite system (GNSS) performs exceedingly well in providing accurate location data almost anywhere on the planet and is most sought after for its high accuracy and global coverage. Its efficiency, however, is limited to outdoor environments: indoors, heavy signal multipath and signal attenuation prevent it from meeting expectations, which is why several indoor localization technologies have emerged. Indoor navigation systems can be wearables, wall-mounted devices, or intelligent models able to calculate the precise location of objects or humans in sophisticated indoor environments filled with obstacles. An indoor navigation system consists of three vital modules: 1. an indoor positioning module that estimates the object's position, 2. a navigation module that routes the object from its current location to the destination, and 3. an object interaction module that provides instructions to the model or system ( 1 ). This three-module design supports localization and navigation (modeling, surveying, and mapping of infrastructure) of location-based assets and object tracking, especially in emergency services for disaster management. With new applications appearing daily, this industry is expected to reach a market value of about 24 billion dollars by the year 2023. The aviation industry uses such systems to help passengers navigate to lounges, track passenger baggage, and perform other airport-related security services. The advertising industry utilizes location-based promotions for the E-commerce sector. The healthcare sector implements location-based services for tracking patient records and whereabouts within the hospital arena. Asset- or object-based tracking using the three-module system is an inevitable part of the logistics industry. Through this positioning technology, customers can be traced and helped to navigate toward the various services available in railway stations, bus stands, etc., benefiting the transportation industry. Indoor positioning technology has also seen a surge in the tourism and automotive industries, as tourists and their assets can be monitored along with vehicle identification. The next time one visits Sydney Airport, one can witness the use of Apple Maps, in which navigating through each terminal is made easy using this technology. Indoor positioning technologies not only comply with commercial-sector standards but are also available for day-to-day home services and applications. Tango, an augmented-reality-based indoor location technology, was developed by the technology giant Google; it provides detailed and precise location data of the user through their mobile device. Apple has gone further by embedding ultra-wideband (UWB) chips in premium iPhones to calculate a user's location in real time.

Salient Techniques and Technologies in Indoor Positioning Systems

The basic principle behind indoor navigation and positioning systems is to accurately measure the range or distance between two devices. This can be done in two basic ways. The first is distance measurement using received signal strength (RSS), in which the strength of the signal between the transmitter and receiver determines the location. Although the accuracy can be reasonable, it is strongly affected by multipath propagation. Conventional, well-established technologies, including WiFi, Bluetooth, RFID, dead reckoning, ultrasonic, and ZigBee, fall under this category.
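For the RSS approach just described, a common way to convert a signal-strength reading into a distance estimate is the log-distance path-loss model. The sketch below is only illustrative: the reference power at 1 m and the path-loss exponent are assumed values that would normally be calibrated for each environment.

```python
def rss_to_distance(rss_dbm: float, rss_at_1m_dbm: float = -45.0,
                    path_loss_exponent: float = 2.5) -> float:
    """Invert the log-distance model: RSS(d) = RSS(1 m) - 10 * n * log10(d)."""
    return 10 ** ((rss_at_1m_dbm - rss_dbm) / (10.0 * path_loss_exponent))

# Example: with these assumed parameters, a -65 dBm reading maps to about 6.3 m.
print(f"{rss_to_distance(-65.0):.1f} m")
```

Because multipath and shadowing make single readings noisy, RSS-based systems typically average many samples or rely on fingerprinting rather than pure ranging.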

Radio Frequency Identification (RFID) uses radio waves for object detection, with RFID readers and tags exchanging signals during this process. An RFID-based tracking system implemented for dynamic targets achieved localization accuracy below 1 m, which makes it a promising option for applications where tracking is needed ( 2 ). Peer-to-peer communication over shorter distances can easily be established using the common Bluetooth technology. ZigBee is a sought-after technology when a low-cost and low-power system has to be implemented, which makes it suitable for smart homes where energy conservation matters ( 3 ). Dead reckoning, unlike the other technologies, relies on velocity for estimating position: it determines the present location from velocity and past position data. A smartphone-based pedestrian dead reckoning system demonstrated the potential of this approach by providing strong indoor positioning results ( 4 ). In ultrasonic systems, the distance is computed from the time of arrival between the emitter and receiver, and the coordinates of the emitter are estimated by multilateration to fixed anchors. The second way of measuring involves estimating the time of flight between several devices. This method yields centimeter-level accuracy and is used by UWB technology, which employs both the time difference of arrival (TDOA) and the time of arrival (TOA) for measurement. UWB is also seen to play a significant role in Industry 4.0.
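Both the ultrasonic time-of-arrival and the UWB time-of-flight measurements mentioned above reduce to multiplying a measured propagation time by the propagation speed. The sketch below shows single-sided two-way ranging, one simple UWB ranging scheme, together with a one-way ultrasonic example; the timestamps are invented for illustration, and real devices additionally correct for clock drift and antenna delays.

```python
SPEED_OF_LIGHT = 299_792_458.0     # m/s
SPEED_OF_SOUND = 343.0             # m/s in air at about 20 degrees C

def two_way_ranging_distance(t_round_s: float, t_reply_s: float,
                             propagation_speed: float = SPEED_OF_LIGHT) -> float:
    """Single-sided two-way ranging: time of flight = (round trip - reply delay) / 2."""
    return propagation_speed * (t_round_s - t_reply_s) / 2.0

# UWB example: a 500 us round trip with a 499.95 us reply delay -> roughly 7.5 m.
print(f"UWB:        {two_way_ranging_distance(500e-6, 499.95e-6):.2f} m")
# Ultrasonic example: a one-way time of arrival of 10 ms -> roughly 3.4 m.
print(f"ultrasonic: {0.010 * SPEED_OF_SOUND:.2f} m")
```

Because radio waves travel at the speed of light, each nanosecond of timing error corresponds to roughly 30 cm of ranging error, which is why the fine time resolution of UWB pulses matters so much for centimeter-level accuracy.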

Several smart factories have emerged by incorporating UWB ( 5 ). Khan et al. ( 6 ) argue that various wireless technologies, including Wi-Fi and LoRa, are among the most suitable for indoor localization applications because they are robust, affordable, and consume a minimal amount of power.

Technologies and techniques in indoor positioning systems go hand in hand, and combinations of technologies or techniques are increasingly fused for better accuracy. Techniques in indoor positioning can be separated into triangulation, proximity, fingerprinting, and vision analysis. The computation of asset location using the geometrical properties of triangles is known as triangulation, and it is applied in two ways: lateration and angulation. Lateration uses distances alone for positioning, whereas angulation uses both angles and distances. Fingerprinting is conducted in two stages, an offline stage (also known as the training stage) and an online stage (also known as the serving stage), for precise object localization. Vision analysis is carried out on images received from several points. When an object is detected with respect to a known position, it is known as proximity analysis, which requires several fixed detectors.
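The lateration step described above can be written as a small linear least-squares problem: subtracting the range equation of one anchor from the others removes the quadratic terms in the unknown position. The anchor layout and ranges in the sketch below are made up for illustration, and a deployed system would also weight the measurements and reject outliers.

```python
import numpy as np

def trilaterate(anchors: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Estimate a 2D position from >= 3 anchor positions and measured ranges."""
    x0, y0, r0 = anchors[0, 0], anchors[0, 1], ranges[0]
    a_rows, b_rows = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        # Linearized equation: 2(xi-x0)x + 2(yi-y0)y = r0^2 - ri^2 + xi^2 - x0^2 + yi^2 - y0^2
        a_rows.append([2.0 * (xi - x0), 2.0 * (yi - y0)])
        b_rows.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    solution, *_ = np.linalg.lstsq(np.array(a_rows), np.array(b_rows), rcond=None)
    return solution

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # assumed anchor layout
true_position = np.array([4.0, 3.0])
measured = np.linalg.norm(anchors - true_position, axis=1) \
    + np.random.default_rng(1).normal(0.0, 0.05, 3)          # 5 cm ranging noise
print(trilaterate(anchors, measured))   # approximately [4.0, 3.0]
```

With three anchors the 2D problem is exactly determined; additional anchors make the least-squares estimate more robust to ranging noise.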

Selection of Relevant Techniques and Technology According to the Environment and Need of the Hour

The requirements of indoor positioning differ greatly from those of outdoor systems. The disparity arises from the diverse layouts of indoor environments, with their complicated and sophisticated pathways, so the accuracy and coverage demands vary accordingly. Indoor positioning systems built specifically for assisted living, monitoring patients at home, and similar uses require accuracy within 1 m, whereas systems operated for urban and rural applications demand accuracy of about a few meters. Thus, a suitable technology has to be chosen with the application type and its place of execution in mind. As there is a need for this technology in all sectors, no single indoor positioning system can be claimed as the ideal solution ( 7 ). Along with accuracy and coverage, maintenance and implementation cost, system size, and power consumption are essential metrics.

The design of an indoor positioning system proceeds in two stages: first, determining the principal indoor positioning technology on which it will be based, and second, determining the technique that will be combined with it. Systems intended to create smart homes, find misplaced objects, and track and monitor daily activities are usually implemented using ZigBee, WiFi, and fingerprinting ( 8 ). Bluetooth has been adopted for low-cost and low-power applications ( 9 ). Similarly, ZigBee consumes minimal power and is inexpensive in most cases and is, thus, used for home applications. Applications that require large coverage areas and centimeter accuracy, including industries and manufacturing sites, preferably implement UWB ( 10 ), which can transfer large amounts of data using minimal energy. Tracking the motion of a visually impaired person or the movement of humans within a small area can easily be implemented with pedestrian dead reckoning, an instance of dead reckoning technology ( 11 ). Emerging indoor positioning systems have also been demonstrated on aerial robots, mobile robots ( 12 ), and humanoid robots; in such cases, criteria such as battery efficiency and power consumption are vital. Based on the chosen technology, a suitable technique is then integrated according to the environment and the needs of the application. Table 1 puts forth a comparison of the various technologies used at present in terms of accuracy, range, power consumption, and noise tolerance.


Table 1 . Comparison of existing indoor localization technologies.

Prevailing Challenges in the Implementation of Indoor Localization Systems

Every technology in indoor positioning has strengths, but each brings unavoidable challenges with it. Once the basic technology and technique are fixed, the challenges that come with them must be tackled without compromising the requirements of the system. The impediments of an indoor environment should be considered, and the location data should remain precise ( 13 ). Contemporary research shows that UWB technology is widely used in industries and manufacturing sites, where it can track and trace both static and dynamic objects with ease. Despite its great utility, UWB signals are easily hindered by indoor obstacles, making error mitigation a necessity ( 14 ). Conventional methods such as WiFi and Bluetooth are often passed over because of their limited range, and ultrasound struggles over wide areas and is subject to frequency restrictions. Privacy and security form one category that is often neglected when considering metrics for indoor positioning systems ( 15 ). These systems are customized to provide accurate location data to the user and their organization alone, and the involvement of a third party is a threat to the user or the responsible organization. Hence, future research is expected to pay more attention to the privacy and security aspects of indoor positioning systems, along with the security of users' data.

The needs of humans and technology are swiftly changing, and such needs should be acknowledged to bring about changes and revolutions over the course of time. Indoor positioning technology is one such domain that attends to human needs in several ways, so researchers are always on the lookout for new formulations in this arena. Every technology in this field is beneficial and addresses particular complications; it is up to the researcher or the industrialist to select the appropriate technology and technique according to the application's needs. From healthcare to travel, indoor positioning technologies are universal and omnipresent. With the internet of things (IoT), intelligent systems, and mobile computing growing at a fast pace, the market for indoor positioning technology has been increasing dramatically. Despite its several advantages, it also presents challenges for researchers to improve on; in particular, the metrics of indoor location systems, including accuracy, maintenance cost, coverage, scalability, and privacy, are major challenges that need to be addressed with efficient measures. Finally, special heed should be given to the privacy and security of indoor positioning systems for the personal privacy and security of users.

Author Contributions

SS conceived the concept and drafted the manuscript. SJ supervised the study and verified the manuscript. KE performed the review and editing. All authors contributed to the article and approved the submitted version.

Funding

This study was supported by the Department of Science and Technology-Natural Resource Database Management System (DST-NRDMS) [Grant No: NRDMS/UG/NetworkProject/e-13/2019 (C) P-4].

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

1. Kunhoth J, Karkar A, Al-Maadeed S, Al-Ali A. Indoor positioning and wayfinding systems: a survey. Human-centric Comput Inform Sci. (2020) 10:1–41. doi: 10.1186/s13673-020-00222-0


2. Li J, Feng G, Wei W, Luo C, Cheng L, Wang H, et al. PSOTrack: A RFID-based system for random moving objects tracking in unconstrained indoor environment. IEEE Intern Things J. (2018) 5:4632–41. doi: 10.1109/JIOT.2018.2795893

3. Tumlin S. From Industry to Home: Rapid Development of a ZigBee-Based Indoor Positioning System for Use in Private Residences (2020).


4. Jeong S, Min J, Park Y. Indoor Positioning Using Deep-Learning-Based Pedestrian Dead Reckoning and Optical Camera Communication. IEEE Access. (2021) 9:133725–34. doi: 10.1109/ACCESS.2021.3115808

5. Lumme I. Indoor Localization in Smart Factory: Utilization of UWB technology in real-case scenario (2021).

6. Khan FU, Awais M, Rasheed MB, Masood B. A comparison of wireless standards in iot for indoor localization using loPy. IEEE Access. (2021) 9:65925–33. doi: 10.1109/ACCESS.2021.3076371

7. Pascacio P, Casteleyn S, Torres-Sospedra J, Lohan ES, Nurmi J. Collaborative indoor positioning systems: a systematic review. Sensors. (2021) 21:1002. doi: 10.3390/s21031002


8. Yang C. Design of smart home control system based on wireless voice sensor. J Sensors . (2021) 2021:26 doi: 10.1155/2021/8254478

9. Lu X, Yin Y, Zhao N, Wei H. Indoor positioning experiment based on phase ranging with bluetooth low energy (BLE). J Physics . 1971:012044. doi: 10.1088/1742-6596/1971/1/012044

10. Xianjia Y, Qingqing L, Queralta JP, Heikkonen J, Westerlund T. Applications of uwb networks and positioning to autonomous robots and industrial systems. in 2021 10th Mediterranean Conference on Embedded Computing (MECO) (New York, NY: IEEE) (2021). Available online at: https://arxiv.org/pdf/2103.13488.pdf (accessed March 28, 2021).

11. Reyes Leiva KM, Jaén-Vargas M, Codina B, Serrano Olmedo JJ. Inertial measurement unit sensors in assistive technologies for visually impaired people, a review. Sensors. (2021) 21:4767. doi: 10.3390/s21144767

12. Su M, Gao S. Design and Implementation of intelligent Home monitoring System based on mobile robot. J. Phy. (2021) 1846:12084. doi: 10.1088/1742-6596/1846/1/012084

13. Ashraf I, Hur S, Park Y. Smartphone sensor based indoor positioning: Current status, opportunities, and future challenges. Electronics. (2020) 9:891. doi: 10.3390/electronics9060891

14. Ridolfi M, Kaya A, Berkvens R, Weyn M, Joseph W, Poorter ED. Self-calibration and collaborative localization for uwb positioning systems: a survey and future research directions. ACM Computing Surveys (CSUR). (2021) 54:1–27. doi: 10.1145/3448303

15. Kim Geok T, Zar Aung K, Sandar Aung M, Thu Soe M, Abdaziz A, Pao Liew C, et al. Review of indoor positioning: Radio wave technology. Appl Sci. (2021) 11:279. doi: 10.3390/app11010279

Keywords: indoor positioning systems (IPS), technique and technology, challenges, features, industry 4.0

Citation: Shyam S, Juliet S and Ezra K (2022) Indoor Positioning Systems: A Blessing for Seamless Object Identification, Monitoring, and Tracking. Front. Public Health 10:804552. doi: 10.3389/fpubh.2022.804552

Received: 29 October 2021; Accepted: 17 January 2022; Published: 23 February 2022.


Copyright © 2022 Shyam, Juliet and Ezra. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Sujitha Juliet, sujitha@karunya.edu



The Effectiveness of UWB-Based Indoor Positioning Systems for the Navigation of Visually Impaired Individuals


1. Introduction

2. Related Work

3. Methodology

3.1. UWB-BLE Beacons

3.2. Mobile App and API Apple U1 Interface

  • Providing constant values during a specific test that are to be saved in the database, e.g., the distance measured by tape or the voltage supplied to the beacon during the test;
  • The ability to select one or more beacons from a drop-down list, taking into account their location relative to the phone;
  • Displaying each beacon with its unique UUID and a continuously updated distance value;
  • Saving to the database using the “Start Recording” switch button, which records all incoming measurements from the previously selected beacon until the button is disabled, or using the “Send Data to Server” button, which sends and saves a single measurement from a selected moment in time;
  • The “Export to CSV” button, which allows the user to download data from the database in CSV format;
  • An extension of the “Export to CSV” button, the “Select Export Date” button, which allows the user to select from the calendar view the starting date from which to export data;
  • The possibility to disconnect all beacons using the “Disconnect All” button or the “Disconnect” button located next to each found beacon, which stops the distance-updating process.

3.3. Measurement Methods

3.4. Subject of Research and System Analysis

  • A set of UWB beacons based on TWR technology;
  • A phone with a designed mobile application dedicated to iOS (an iPhone 13 with U1 chipset was used in this research) enabling distance measurement using UWB beacons. The mobile application enables entering additional information about the environment, such as the measured distance using a measuring tape, or turning on/off system options, e.g., CameraAssist;
  • The reference system used in this study was the JetRacer mobile platform equipped with LIDAR, in two main cases, static and dynamic;
  • A REST API server enabling communication between the mobile application, the JetRacer platform, and the MySQL database (a minimal endpoint sketch follows this list);
  • Two databases—CoreData (collecting data when the mobile platform was not involved in the measurements) and MySQL (collecting data during dynamic tests using the Jetracer platform); the databases collected distance measurements as well as environmental information.
The diagram of the measurement system used in this research is shown in Figure 1. In addition, the following elements were used to conduct the research:
  • A GTEM (Gigahertz Transverse ElectroMagnetic) cell for simulating LoS/NLoS conditions. A GTEM cell is a chamber that simulates free-space conditions for electromagnetic wave propagation and is used to test the electromagnetic interference and immunity of electronic devices; overall, GTEM cells are valuable tools in electromagnetic compatibility work, providing a convenient and efficient way to conduct testing in a controlled environment;
  • Various obstacles such as concrete walls, glass walls, and wooden doors;
  • A designed test environment, which consisted of rooms in a residential building, to best replicate the everyday situation; a snapshot of the test room using the SLAM (simultaneous localization and mapping) algorithm is presented in Figure 2 .
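To make the data flow between these components concrete, the sketch below shows how a single measurement could be posted from the phone-side logger to the REST API server backed by the MySQL database. The endpoint path, JSON field names, and server address are assumptions for illustration; the paper does not publish its API.

```python
# Hypothetical client call to the REST API server described above.
# Requires the third-party "requests" package; the endpoint and field
# names are assumptions, not the paper's actual interface.
import requests

def send_measurement(server_url, beacon_uuid, distance_m, reference_m,
                     los_condition, camera_assist):
    payload = {
        "beacon_uuid": beacon_uuid,      # UWB beacon identifier
        "distance_m": distance_m,        # distance reported by TWR ranging
        "reference_m": reference_m,      # tape/LIDAR reference distance
        "los": los_condition,            # "LoS" or "NLoS" during the test
        "camera_assist": camera_assist,  # whether CameraAssist was enabled
    }
    resp = requests.post(f"{server_url}/measurements", json=payload, timeout=5)
    resp.raise_for_status()
    return resp.json()

# Example with a hypothetical server address:
# send_measurement("http://192.168.0.10:8000", "B1-UUID", 1.42, 1.50, "LoS", False)
```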

3.5. Experiment Plan

3.5.1. Influence of Phone Position

3.5.2. Impact of CameraAssist

3.5.3. Impact of LoS/NLoS Conditions

  • Accuracy of the device;
  • Durability/reliability of the device;
  • Adaptability to a dynamic environment.

4.1. Phone Multi-Axis Position

4.2. Different Phone Height

4.3. Various Distance

4.4. CameraAssist Option

4.5. LoS/NLoS Conditions

4.6. Impact of Obstacles

4.7. LIDAR and UWB Comparison

5. Conclusions

Author Contributions

Institutional Review Board Statement

Informed Consent Statement

Data Availability Statement

Conflicts of Interest



Time-based: ToA, TDoA, TW-ToA, PoA
Signal-based: RSSI, CSI
Angulation: AoA, AoD
Proximity detection: RFID, Cell-ID
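The beacons evaluated in this study use two-way ranging, which falls under the time-based (TW-ToA) category above. Below is a minimal sketch of the underlying distance computation, assuming idealized single-sided two-way ranging with illustrative timing values; practical implementations (e.g., double-sided TWR) add further corrections for clock drift.

```python
# Minimal sketch of single-sided two-way ranging (TW-ToA).
# Timing values in the example are illustrative, not measured data.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def twr_distance(t_round_s: float, t_reply_s: float) -> float:
    """Estimate the range from one poll/response exchange.

    t_round_s: time at the initiator between sending the poll and
               receiving the response.
    t_reply_s: processing (turnaround) delay reported by the responder.
    """
    time_of_flight = (t_round_s - t_reply_s) / 2.0
    return SPEED_OF_LIGHT * time_of_flight

# Example: a one-way flight time of ~6.67 ns corresponds to roughly 2 m.
# twr_distance(t_round_s=213.34e-9, t_reply_s=200.0e-9)  # ~2.0 m
```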
Distance measurement statistics for each nominal distance (values in metres, except count):

          Distance 0.5 [m]   Distance 1 [m]   Distance 1.5 [m]   Distance 2 [m]   Distance 2.5 [m]
count     937                602              1024               1039             993
mean      0.58               1.10             1.42               1.88             2.39
std       0.01               0.01             0.02               0.01             0.02
min       0.55               1.05             0.99               1.84             2.32
max       0.62               1.13             1.46               1.92             2.44
Distance statistics with and without CameraAssist (values in metres):

          With CameraAssist (0.5 [m])   Without CameraAssist (0.5 [m])   With CameraAssist (2 [m])   Without CameraAssist (2 [m])
mean      0.32                          0.58                             1.80                        1.88
std       0.01                          0.01                             0.01                        0.01
min       0.29                          0.55                             1.79                        1.84
max       0.36                          0.62                             1.81                        1.92
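The summary rows above (count, mean, std, min, max) can be reproduced from data exported with the app's CSV feature. The sketch below assumes hypothetical column names reference_m (nominal distance) and distance_m (UWB-reported distance); the actual export format may differ.

```python
# Sketch reproducing the summary statistics shown in the tables above.
# Column names are assumptions about the exported CSV, not the paper's schema.
import pandas as pd

def summarize(csv_path: str) -> pd.DataFrame:
    df = pd.read_csv(csv_path)
    # Group measurements by the nominal (tape-measured) distance and
    # aggregate the UWB-reported distance.
    return (df.groupby("reference_m")["distance_m"]
              .agg(["count", "mean", "std", "min", "max"])
              .round(2))

# print(summarize("uwb_measurements.csv"))
```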

Share and Cite

Rosiak, M.; Kawulok, M.; Maćkowski, M. The Effectiveness of UWB-Based Indoor Positioning Systems for the Navigation of Visually Impaired Individuals. Appl. Sci. 2024 , 14 , 5646. https://doi.org/10.3390/app14135646



A Low-Cost Indoor Navigation and Tracking System Based on Wi-Fi-RSSI

  • Published: 27 June 2024


  • Nina Siti Aminah   ORCID: orcid.org/0000-0003-4725-0130 1 ,
  • Arsharizka Syahadati Ichwanda 1 ,
  • Daryanda Dwiammardi Djamal 1 ,
  • Yohanes Baptista Wijaya Budiharto 1 &
  • Maman Budiman 1  


In recent years, the number of smartphone users has increased dramatically. Smartphones provide a variety of services, including indoor navigation and tracking that use the Received Signal Strength Indicator (RSSI) values of Wi-Fi (Wireless Fidelity) routers to estimate the user's position. In this research, we developed a navigation and tracking system using a fingerprint map and the k-Nearest Neighbor (k-NN) algorithm. In this way, we can guide the user along the shortest path to their destination using Dijkstra's algorithm. These features are provided by an RSSI-based navigation application on an Android smartphone. At the same time, the user's estimated position is sent to a server and viewed in a real-time web application. This system assists visitors in finding their way in a complex building and, at the same time, allows building owners to record and analyze visitor movement. One key benefit of the system is its low initial cost, since it only utilizes the existing Wi-Fi infrastructure. Experimental results show that this system can reach an accuracy of up to 78% with distance errors of less than 3 m.
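As a rough illustration of the positioning step described in this abstract, the following Python sketch shows k-NN estimation over a Wi-Fi RSSI fingerprint map. The radio map, access-point names, and coordinates are made-up example values rather than data from the paper; in the described system, the estimated position is then fed to Dijkstra's algorithm over the building's path graph to compute the route shown to the user.

```python
# Minimal sketch of RSSI fingerprint positioning with k-NN.
# The fingerprint database and AP names are illustrative values only.
import math

# Offline phase: radio map of (x, y) reference points -> mean RSSI per AP (dBm).
FINGERPRINTS = {
    (0.0, 0.0): {"AP1": -45, "AP2": -70, "AP3": -62},
    (5.0, 0.0): {"AP1": -60, "AP2": -55, "AP3": -66},
    (0.0, 5.0): {"AP1": -58, "AP2": -72, "AP3": -50},
    (5.0, 5.0): {"AP1": -68, "AP2": -58, "AP3": -52},
}

def knn_position(live_scan: dict, k: int = 3) -> tuple:
    """Estimate (x, y) as the centroid of the k fingerprints closest
    to the live scan in RSSI space (Euclidean distance over common APs)."""
    def rssi_distance(fingerprint):
        common = set(fingerprint) & set(live_scan)
        return math.sqrt(sum((fingerprint[ap] - live_scan[ap]) ** 2
                             for ap in common))

    nearest = sorted(FINGERPRINTS,
                     key=lambda p: rssi_distance(FINGERPRINTS[p]))[:k]
    x = sum(p[0] for p in nearest) / len(nearest)
    y = sum(p[1] for p in nearest) / len(nearest)
    return x, y

# Online phase: a single Wi-Fi scan from the smartphone.
# knn_position({"AP1": -50, "AP2": -68, "AP3": -60})
```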


Data availability

The data used to support the findings of this study are included within the article.


This research is fully funded by the Indonesian Ministry of Research and Technology/National Agency for Research and Innovation, and Indonesian Ministry of Education and Culture, under World Class University Program managed by Institut Teknologi Bandung. The authors have no relevant financial or non-financial interests to disclose.

Author information

Authors and affiliations

Internet of Things Laboratory, Physics Program Study, Institut Teknologi Bandung, Jl. Ganesha 10, Bandung, 40132, Indonesia

Nina Siti Aminah, Arsharizka Syahadati Ichwanda, Daryanda Dwiammardi Djamal, Yohanes Baptista Wijaya Budiharto & Maman Budiman


Contributions

A.S. Ichwanda, D.D. Djamal, and Y.B.W. Budiharto contributed to the interpretation of results and the preparation of figures; N.S. Aminah wrote the main manuscript; M. Budiman reviewed the results and approved the final version of the manuscript.

Corresponding author

Correspondence to Nina Siti Aminah.

Ethics declarations

Conflict of interest

The authors declare that they have no competing interests as defined by Springer, or other interests that might be perceived to influence the results and/or discussion reported in this paper.

Consent for Publication

The authors certify that this material or similar material has not been and will not be submitted to or published in any other publication before. Furthermore, the authors certify that they have participated sufficiently in the work to take public responsibility for the content, including participation in the concept, design, analysis, writing, or revision of the manuscript.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Aminah, N.S., Ichwanda, A.S., Djamal, D.D. et al. A Low-Cost Indoor Navigation and Tracking System Based on Wi-Fi-RSSI. Wireless Pers Commun (2024). https://doi.org/10.1007/s11277-024-11361-3


Accepted: 13 June 2024

Published: 27 June 2024

DOI: https://doi.org/10.1007/s11277-024-11361-3

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • k-NN algorithm




COMMENTS

  1. (PDF) Indoor Positioning System: A Review

    This paper introduces a review article on indoor positioning technologies, algorithms, and techniques. This review paper is expected to deliver a better understanding to the reader and compared ...

  2. (PDF) Analysis of an indoor positioning systems

    An Indoor Positioning System (IPS) is a system of network-connected devices which is used for wireless locationing of objects and persons inside buildings and partly covered areas (Lemmens, 2013 ...

  3. Indoor positioning and wayfinding systems: a survey

    Navigation systems help users access unfamiliar environments. Current technological advancements enable users to encapsulate these systems in handheld devices, which effectively increases the popularity of navigation systems and the number of users. In indoor environments, lack of Global Positioning System (GPS) signals and line of sight with orbiting satellites makes navigation more ...

  4. A Meta-Review of Indoor Positioning Systems

    The paper proceeds with a short introduction to indoor positioning systems (), including the definition of accuracy and other IPS evaluation metrics.Later, Section 3 describes the main technologies used for indoor positioning, including the description of techniques and some methods applied to each of them. Section 4 then addresses smartphone-based indoor positioning using WiFi and BLE ...

  5. A Survey of Indoor Location Technologies, Techniques and Applications

    Abstract. The recent academic research surrounding indoor positioning systems (IPS) and indoor location-based services (ILBS) are reviewed to establish the current state-of-the-art for IPS and ILBS. This review is focused on the use of IPS / ILBS for cyber-physical systems to support secure and safe asset management (including people as assets ...

  6. A survey of indoor positioning systems based on a six-layer model

    Complementing the previous survey papers, this paper provides a survey of the latest research works on indoor positioning based on the six-layer model. Our emphasis is on systematic categorisation, machine learning-based enhancements, collaborative localisation and COVID-19-related applications.

  7. Indoor positioning systems in hospitals: A scoping review

    Objective. This research examines relevant studies regarding Indoor Positioning Systems (IPS) in hospitals and IPS that are designed for hospitals and in preparation for implementation, by investigating the respective technologies, techniques, prediction-improving methods, evaluation results, and limitations of the IPS.

  8. (PDF) A Meta-Review of Indoor Positioning Systems

    This paper provides a meta-review that performed a comprehensive compilation of 62 survey papers in the area of indoor positioning. The paper provides the reader with an introduction to IPS and ...

  9. Collaborative Indoor Positioning Systems: A Systematic Review

    Research and development in Collaborative Indoor Positioning Systems (CIPSs) is growing steadily due to their potential to improve on the performance of their non-collaborative counterparts. In contrast to the outdoors scenario, where Global Navigation Satellite System is widely adopted, in (collaborative) indoor positioning systems a large variety of technologies, techniques, and methods is ...

  10. Collaborative Indoor Positioning Systems: A Systematic Review

    Research and development in Collaborative Indoor Positioning Systems (CIPSs) is growing steadily due to their potential to improve on the performance of their non-collaborative counterparts. ... the diversity of evaluation procedures and scenarios hinders a direct comparison. This paper presents a systematic review that gives a general view of ...

  11. Research on indoor positioning system algorithm based ...

    Fig. 1. UWB indoor positioning system diagram. In the system, the tag transmits and receives UWB signals through the internal chip and the complex electronic circuit around the chip, and communicates with the base station in real time to obtain its own location information. Before positioning, the secondary base station is placed in a ...

  12. Overview of indoor positioning system technologies

    Constant changes in the field of indoor positioning systems (IPS) dictate that we keep up with new and developing trends and technologies. With improving accuracy, IPS found widespread application in commercial environments, such as asset or personnel tracking. However, it is yet to be established in the everyday personal usage applications. In this paper, we present a technological overview ...

  13. Light-Based Indoor Positioning Systems: A Review

    Since the well-established navigation systems such as GPS are ineffective in indoor environments, research into developing novel indoor positioning technologies has emerged in recent years. While several technologies are being investigated, a practical and reliable indoor positioning system is yet to emerge. Indoor positioning using light signals holds a great potential to provide a reliable ...

  14. Indoor navigation: state of the art and future trends

    This paper reviews the state of the art and future trends of indoor Positioning, Localization, and Navigation (PLAN). It covers the requirements, the main players, sensors, and techniques for indoor PLAN. Other than the navigation sensors such as Inertial Navigation System (INS) and Global Navigation Satellite System (GNSS), the environmental-perception sensors such as High-Definition map (HD ...

  15. A Review of Indoor Positioning Systems (IPS) and Their ...

    This paper cites the different indoor positioning systems actually used with an emphasis on the benefits, drawbacks and limitations of each one. It essentially aims to highlight the IPS which gives the best result in terms of quality, price, response time, distance, and other choice criteria previously mentioned.

  16. Overview of WiFi fingerprinting‐based indoor positioning

    So far, almost all WiFi indoor positioning systems are based on the received signal strength (RSS). In general, these systems can be classified into two categories: (1) indoor positioning based on the measurement of the distance (range-based method) and (2) indoor positioning based on a WiFi fingerprint. 3.1 Range-based WiFi indoor positioning ...

  17. Indoor Positioning Systems: A Blessing for Seamless Object

    This article is part of the Research Topic Extracting Insights from Digital Public Health Data using Artificial Intelligence View all 15 articles. Indoor Positioning Systems: A Blessing for Seamless Object Identification, Monitoring, and Tracking ... The continual proliferation of indoor positioning technology is laid out in this paper along ...

  18. Research and development of indoor positioning

    Indoor positioning systems have been sufficiently researched to provide location information of persons and devices. This paper is focused on the current research and further development of indoor positioning. The standard evolution and industry development are summarized. There are various positioning systems according to the scenarios, cost and accuracy. However, there is a basic positioning ...

  19. Free Full-Text

    An accurate and reliable Indoor Positioning System (IPS) applicable to most indoor scenarios has been sought for many years. The number of technologies, techniques, and approaches in general used in IPS proposals is remarkable. Such diversity, coupled with the lack of strict and verifiable evaluations, leads to difficulties for appreciating the true value of most proposals. This paper provides ...

  20. (PDF) Indoor positioning and wayfinding systems: a survey

    In particular, the paper reviews different computer vision-based indoor navigation and positioning systems along with indoor scene recognition methods that can aid the indoor navigation.

  21. A Systematic Literature Review of Indoor Position System Accuracy and

    Most researchers still face problems with in-building localization techniques, known as Indoor Positioning System (IPS) implementations, especially in improving their precision. In this paper, we present an assessment of papers (2006-2016) on indoor positioning systems that focus on accuracy and sensor placement for both BLE and Wi-Fi.

  22. The Effectiveness of UWB-Based Indoor Positioning Systems for the

    The aim of this paper and research into the accuracy of beacons that are based on the fusion of BLE and UWB using two-way ranging (TWR) is to analyze and evaluate the influence of various factors on the accuracy of distance measurements. ... "The Effectiveness of UWB-Based Indoor Positioning Systems for the Navigation of Visually Impaired ...

  23. A Survey on Scalable Wireless Indoor Localization ...

    Indoor environments are crucial in our daily lives, spanning essential services in both civilian and military sectors. With the Global Positioning System (GPS) proving unreliable indoors due to physical obstructions, research has increasingly focused on wireless signal-based indoor localization systems as effective alternatives [].A plethora of review studies on wireless indoor localization ...

  24. A survey on indoor positioning systems

    This paper aims to provide the reader with a review of the main technologies explored in the literature to solve the indoor localization issue. Furthermore, some systems that use these enabling technologies in real-world scenarios are presented and discussed. This could deliver a better understanding of the state-of-the-art and motivate new research efforts in this promising field. Finally ...

  25. A Low-Cost Indoor Navigation and Tracking System Based on Wi ...

    In the recent years, the number of smartphone users has increased dramatically every year. Smartphones produce a variety of services including indoor navigation and tracking using the Received Signal Strength Indicator (RSSI) value of the Wi-Fi (Wireless Fidelity) routers to estimate user position. In this research, we developed a navigation and tracking system using a Fingerprint map and k ...

  26. Research on Indoor Positioning System Based on UWB Technology

    This paper proposes the research of high-precision indoor positioning system based on UWB. Compared with traditional indoor wireless positioning technology, this technology has the function of providing high-precision positioning information for multiple positioning points in real time at the same time. First use UWB ultra-wideband technology to calculate the position distance based on the ...