
Thursday, May 24, 2018

PHOTOGRAMMETRIC SURVEYING AND 3D MODELING USING UNMANNED AERIAL VEHICLE ( DRONES ) --- PART 5



STEPS OF DATA PROCESSING FOR ANY UAV BASED WORK

 

All images acquired by the digital camera were downloaded to the computer after the flight mission. Each image was saved as a JPEG file. The quality of the images was checked before they were used in the processing stage; some images may have quality problems, such as blurring or colour-balancing errors introduced during the flight mission, which usually arise from the attitude of the UAV in flight. If the overall image quality had been very poor, another flight mission would have been required. In this study, however, all acquired images were of good quality and were passed on to photogrammetric processing.

UAV processing software is able to process aerial images and to produce a digital orthophoto and a digital elevation model (DEM) of the study area. The photogrammetric technique involves several processes, such as interior orientation, relative orientation, aerial triangulation and bundle adjustment. Interior orientation requires the camera parameters, including pixel size, focal length and principal point coordinates; all of these parameters were defined before the processing stage. Relative orientation uses an image-correlation algorithm to transfer tie points between images, and these tie points align all acquired images in the same relative configuration in which they were taken during the flight mission. Ground control points were introduced during image processing in order to project the result into the local coordinate system. The ground control points were surveyed using Real Time Kinematic GPS (RTK-GPS).
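To make the role of the interior orientation parameters concrete, the sketch below shows how pixel size, focal length and the principal point map a point in the camera frame to pixel coordinates under the standard pinhole model. The numeric values are placeholders for illustration only, not the parameters of the camera used in this study.

```python
# Minimal pinhole (interior orientation) sketch; sign conventions vary
# between implementations, so treat this purely as an illustration.

def project_to_pixel(X, Y, Z, focal_mm, pixel_size_mm, cx_px, cy_px):
    """Project a point given in the camera frame (Z along the optical axis)
    to pixel coordinates using focal length, pixel size and principal point."""
    x_mm = focal_mm * X / Z            # image-plane coordinates in millimetres
    y_mm = focal_mm * Y / Z
    u = cx_px + x_mm / pixel_size_mm   # convert to pixels about the principal point
    v = cy_px + y_mm / pixel_size_mm
    return u, v

# Example: a point 1.2 m right, 0.5 m up and 60 m in front of the camera,
# assuming an 8.8 mm lens, 2.4 micron pixels and a 5472 x 3648 sensor.
u, v = project_to_pixel(1.2, 0.5, 60.0, focal_mm=8.8,
                        pixel_size_mm=0.0024, cx_px=2736.0, cy_px=1824.0)
print(round(u, 1), round(v, 1))   # approx. 2809.3 1854.6
```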

In the present work, several tasks were performed with the aim of analysing the feasibility of acquiring and using nadir images for 2D mapping and 3D modelling, and of performing measurements and analysis on the UAV-acquired images using photogrammetry. Different software packages that follow different workflows were compared in order to evaluate their strengths and weaknesses. The accuracy of aerial data is directly related to the spatial resolution of the input imagery; high-resolution UAV images can compete with traditional aerial mapping solutions that rely on highly accurate alignment and positioning sensors on board. The acquired data were processed using several software tools: Agisoft PhotoScan Professional, Pix4D, ArcMap 10.3 and DroneDeploy. The entire process is carried out almost automatically by these tools, based on photogrammetry and computer vision (CV) algorithms that allow a large number of images to be processed quickly and easily, with limited influence of the user on the resulting dense point cloud. All of them perform image alignment and dense point cloud generation and, subsequently, produce a triangulated mesh and extract DEMs and orthophotos. In general, the only input data these tools require for 3D model reconstruction are the acquired images and some GCPs, since it is not even necessary to know the exterior orientation parameters of the cameras a priori. In this case the alignment performed by PhotoScan was used as input.

                STEPS TO PROCESS THE UAV ACQUIRED DATA



Pix4Dcapture automatically starts downloading images to the phone or tablet after the mission's final photo is captured; alternatively, the images can be taken directly from the micro SD card. Keep the drone and remote controller switched on and connected to the phone or tablet to download the images wirelessly.
1. Add Photos: Processing images with PhotoScan or any other software includes the following main steps: loading photos into PhotoScan, inspecting the loaded images, removing unnecessary images, aligning photos, building the dense point cloud, building the mesh (3D polygonal model), generating texture, building the tiled model, building the digital elevation model, building the orthomosaic, and exporting the results. A scripted sketch of this complete workflow is given after the export steps below.
2. Loading and aligning photos: After adding photos with the corresponding command from the Workflow menu, the added photos are optimized and then aligned according to the camera positions; this is performed automatically by the software by clicking the Align Photos command. The camera position information is imported as follows:

I. Open the Reference pane using the corresponding command from the View menu.

II. Click the Import button on the Reference pane toolbar and select the file containing the camera position information in the Open dialog. The easiest way is to load a simple character-separated file (*.txt, *.csv) that contains the x and y coordinates and height for each camera position (camera orientation data, i.e. pitch, roll and yaw values, can also be imported, but are not required to reference the model). In the Import CSV dialog, indicate the delimiter according to the structure of the file and select the row from which to start loading. Note that the # character indicates a commented line, which is not counted when numbering the rows. Indicate which parameter is specified in each column by setting the correct column numbers in the Columns section of the dialog. It is also recommended to specify a valid coordinate system in the corresponding field for the camera-centre values.

III. Check the settings in the sample data field of the Import CSV dialog, then click the OK button. The data will be loaded into the Reference pane.
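For reference, the short sketch below parses the kind of character-separated file described above (label, x, y, z, with # marking commented lines). It is a plain-Python illustration of the file structure the Import CSV dialog expects; the file contents and image labels are invented for the example.

```python
import csv, io

# Illustrative reference file: label, easting/x, northing/y, height.
# Lines starting with '#' are comments, just as in the Import CSV dialog.
sample = """# label,x,y,z
IMG_0001.JPG,85.441448,23.412947,120.5
IMG_0002.JPG,85.441446,23.412949,120.7
"""

cameras = {}
for row in csv.reader(io.StringIO(sample), delimiter=","):
    if not row or row[0].startswith("#"):   # skip blank and commented lines
        continue
    label, x, y, z = row[0], float(row[1]), float(row[2]), float(row[3])
    cameras[label] = (x, y, z)

print(cameras["IMG_0001.JPG"])              # (85.441448, 23.412947, 120.5)
```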

3. Check Camera Calibration: Open Tools Menu → Camera Calibration window

By default, PhotoScan estimates the intrinsic camera parameters during the camera alignment and optimization steps, based on initial values derived from the EXIF data. If pixel size and focal length (both in mm) are missing from the image EXIF, and therefore from the Camera Calibration window, they can be entered manually prior to processing, according to the camera and lens specifications.

If a pre-calibrated camera is used, calibration data can be loaded in one of the supported formats using the Load button in the window. To prevent the pre-calibrated values from being adjusted by PhotoScan during processing, the Fix Calibration flag must be checked.

PhotoScan can process images taken by different cameras in the same project. In this case, multiple camera groups will appear in the left frame of the Camera Calibration window, split by default according to image resolution, focal length and pixel size. Calibration groups may also be split manually if necessary. If an ultra-wide or fisheye lens is used, it is recommended to switch the camera type from Frame (default) to Fisheye prior to processing.
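The same settings can also be made from PhotoScan's Python console. The attribute names below follow the PhotoScan 1.x scripting API as I understand it and may differ in other versions (later Metashape releases renamed the module), so treat this as a hedged sketch of the dialog fields rather than a definitive script; the numeric values are placeholders.

```python
# Hedged sketch for the PhotoScan 1.x Python console (names may differ in
# later Metashape releases); values are placeholders, not the study camera.
import PhotoScan

chunk = PhotoScan.app.document.chunk
for sensor in chunk.sensors:
    sensor.focal_length = 8.8      # mm, from the lens specification
    sensor.pixel_width = 0.0024    # mm per pixel, from the sensor specification
    sensor.pixel_height = 0.0024   # mm per pixel
    sensor.fixed = False           # True corresponds to the Fix Calibration flag
    # For ultra-wide / fisheye lenses, switch the camera type before processing:
    # sensor.type = PhotoScan.Sensor.Type.Fisheye
```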

4. Point cloud generation and building the dense cloud: PhotoScan allows a dense point cloud model to be generated and visualized. Based on the estimated camera positions, the program calculates depth information for each camera and combines it into a single dense point cloud.

5. Building the mesh: PhotoScan supports several reconstruction methods and settings, which help to produce optimal reconstructions for a given data set.

6. Building the DEM: PhotoScan allows a digital elevation model (DEM) to be generated and visualized. A DEM represents the surface model as a regular grid of height values. A DEM can be rasterized from a dense point cloud, a sparse point cloud or a mesh; the most accurate results are obtained from dense point cloud data. PhotoScan also enables DEM-based point, distance, area and volume measurements, as well as the generation of cross-sections for a part of the scene selected by the user (a simple grid-based sketch of such measurements follows below).
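To illustrate what DEM-based volume and cross-section measurements amount to, here is a small NumPy sketch on a toy grid. It is a generic illustration of the idea, not PhotoScan's own implementation; the DEM, cell size and base level are invented.

```python
import numpy as np

def volume_above_base(dem, cell_size_m, base_level_m, mask=None):
    """Volume (m^3) between the DEM surface and a flat base level,
    summed over grid cells; `mask` selects the region of interest."""
    heights = dem - base_level_m
    if mask is not None:
        heights = np.where(mask, heights, 0.0)
    return float(heights.sum() * cell_size_m ** 2)

def cross_section(dem, cell_size_m, p0, p1, n_samples=100):
    """Heights sampled along the straight line from grid point p0 to p1
    (row, col): a simple DEM-based cross-section profile."""
    rows = np.linspace(p0[0], p1[0], n_samples)
    cols = np.linspace(p0[1], p1[1], n_samples)
    profile = dem[rows.round().astype(int), cols.round().astype(int)]
    distance = np.hypot(rows - p0[0], cols - p0[1]) * cell_size_m
    return distance, profile

# Toy DEM: a 1 m high, 10 x 10 cell mound on a flat surface, 0.5 m cells.
dem = np.zeros((50, 50))
dem[20:30, 20:30] = 1.0
print(volume_above_base(dem, 0.5, 0.0))   # 100 cells * 0.25 m^2 * 1 m = 25.0
```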
7. Building the orthomosaic: Orthomosaic export is normally used to generate high-resolution imagery based on the source photos and the reconstructed model. The most common application is processing aerial photographic survey data, but it may also be useful when a detailed view of the object is required.

8. Export Orthomosaic

I. Select the Export Orthomosaic → Export JPEG/TIFF/PNG command from the File menu.
II. Set the following recommended values for the parameters in the Export Orthomosaic dialog:

Projection: Desired coordinate system
Pixel size: desired export resolution (please note that for WGS84 coordinate system units should be specified in degrees. Use Meters button to specify the resolution in meters).
Split in blocks: 10000 x 10000 (if the exported area is large it is recommended to enable Split in Blocks feature, since the memory consumption is rather high at exporting stage)
Region: set the boundaries of the model's part that should be projected and presented as orthomosaic. Also polygonal shapes drawn in the ortho view and marked as boundaries will be taken into account for the orthomosaic export.
TIFF compression and JPEG quality should be specified according to the job requirements. The BigTIFF format allows the TIFF file size limit to be exceeded for large orthomosaics, but it is not supported by some applications.

III. Click Export... button and then specify target file name and select type of the exported file (e.g. GeoTIFF). 

IV. Click Save button to start ortho-mosaic generation.
9. Export DEM: Select the Export DEM → Export GeoTIFF/BIL/XYZ command from the File menu and set the following recommended values in the Export DEM dialog:
Projection: Desired coordinate system
No-data value: value for not visible points; should be specified according to the requirements of the post processing application.
Pixel size: desired export resolution
Split in blocks: 10000 x 10000 (if the exported area is large, it is recommended to enable Split in blocks feature, since the memory consumption is rather high at exporting stage)
Region: set the boundaries of the model's part that should be projected and presented as DEM. Also polygonal shapes drawn in the ortho view and marked as boundaries will be taken into account for the DEM export.
Click Export... button and then specify target file name and select type of the exported file (e.g. GeoTIFF).
Click Save button to start DEM generation.
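As promised under step 1, here is a hedged sketch of the complete workflow scripted through PhotoScan's Python console. The method names follow the PhotoScan 1.x API as I understand it (they changed in later Metashape releases), keyword arguments such as accuracy and quality are left at their defaults, and all file paths are placeholders, so treat this as an outline of the button sequence above rather than a definitive script.

```python
# Hedged outline of the processing chain for the PhotoScan 1.x Python console;
# method names may differ in other versions, and all paths are placeholders.
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.addChunk()

# 1-2. Add and align photos (GCPs / camera references are set in the GUI).
chunk.addPhotos(["/path/to/IMG_0001.JPG", "/path/to/IMG_0002.JPG"])
chunk.matchPhotos()          # accuracy / pair preselection left at defaults
chunk.alignCameras()

# 4-5. Dense point cloud, mesh and texture.
chunk.buildDenseCloud()
chunk.buildModel()
chunk.buildUV()
chunk.buildTexture()

# 6-7. DEM and orthomosaic.
chunk.buildDem()
chunk.buildOrthomosaic()

# 8-9. Export the results and save the project.
chunk.exportOrthomosaic("/path/to/orthomosaic.tif")
chunk.exportDem("/path/to/dem.tif")
doc.save("/path/to/project.psx")
```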



Wednesday, May 23, 2018

PHOTOGRAMMETRIC SURVEYING AND 3D MODELING USING UNMANNED AERIAL VEHICLE ( DRONES ) --- PART 4 ( UAV FLIGHT PLANNING)



FLIGHT PLANNING

For photogrammetric surveying and 3D modelling of the study area, the Android applications Pix4Dcapture and DroneDeploy were used to execute the flight plan. Flight planning was carried out based on several factors, such as the overall area of the site to be covered, the required precision level (spatial resolution), the overall flight time, the speed limit, and the height of the buildings. The flying height and field of view of a UAV depend mainly on three factors: the desired spatial resolution of the eventual images, the focal length of the camera, and the type of mission being planned.
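The relation between flying height, focal length, pixel size and ground sample distance (GSD) mentioned above can be written as GSD = pixel size × height / focal length. The short sketch below evaluates it both ways; the camera numbers are placeholders, not the exact specification of the drone used here.

```python
def flying_height_for_gsd(gsd_m, focal_mm, pixel_size_um):
    """Flying height (m) needed for a given ground sample distance, from
    GSD = pixel_size * height / focal_length (units converted consistently)."""
    return gsd_m * focal_mm / (pixel_size_um * 1e-3)

def gsd_at_height(height_m, focal_mm, pixel_size_um):
    """Ground sample distance (m/pixel) obtained at a given flying height."""
    return height_m * (pixel_size_um * 1e-3) / focal_mm

# Placeholder camera: 8.8 mm lens with 2.4 micron pixels.
print(round(flying_height_for_gsd(0.03, 8.8, 2.4), 1))   # ~110.0 m for a 3 cm GSD
print(round(gsd_at_height(80.0, 8.8, 2.4), 4))           # ~0.0218 m (2.2 cm) at 80 m
```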



Flight plan requirements to carry out UAV based coverage over BIT Campus for 3-D modelling






       Pick a time to fly: One of the most important steps in using your drone to make a 3D model is picking a good time to fly. Besides avoiding high winds or rain, it is also important to pick a time with good lighting. The worst times of day are too early or too late, because that is when the shadows are longest and have the greatest effect on the outcome of the model.

    Capture nadir imagery: Start by capturing nadir imagery, i.e. photos captured from directly above looking down, using the free DroneDeploy flight app (iOS or Android). Simply outline the area to be flown on a base-layer map and the app generates a flight plan. Following a safety check, the drone takes off automatically, flies along the automated flight path capturing images, and then lands.

Circle the structure to capture oblique imagery: If you are making a 3D model of relatively flat terrain, an overhead flight may be sufficient to produce a good model. However, if you are modelling a structure or rock formation with steep, vertical or concave sides, overhead images do not capture a good view of the sides of the structure. For this reason, it is recommended to fly two additional orbital flights around the structure, capturing oblique imagery to improve the quality of the model. When capturing oblique images, it is important to avoid capturing the horizon within the frame. These orbital flights can be flown while manually triggering the camera shutter for each picture; if you are just starting out, however, you might experiment with flying very slowly and setting the camera, through your drone's flight app, to capture images automatically every 3–5 seconds (see the speed and overlap sketch after this list).


Process the imagery to generate the 3D model: Upload all photos from all flights to DroneDeploy or another package such as Agisoft PhotoScan or Pix4D and choose to process the imagery as a "structure". After a few hours, DroneDeploy's cloud-based processing stitches all the images together and the 3D model is complete.
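To pick a sensible speed for the 3–5 second automatic trigger mentioned above, the image footprint, desired forward overlap and trigger interval fix a maximum ground speed. The sketch below works through the arithmetic with placeholder numbers (GSD, image size, overlap), not the actual mission settings.

```python
def max_speed_for_overlap(gsd_m, pixels_along_track, forward_overlap, interval_s):
    """Maximum ground speed (m/s) that still gives the requested forward
    overlap when the camera fires every `interval_s` seconds."""
    footprint_m = gsd_m * pixels_along_track           # ground coverage along track
    spacing_m = footprint_m * (1.0 - forward_overlap)  # distance between exposures
    return spacing_m / interval_s

# Placeholder numbers: 2.2 cm GSD, 3648 pixels along track, 80% forward
# overlap, one photo every 5 s (the upper end of the 3-5 s suggestion).
print(round(max_speed_for_overlap(0.022, 3648, 0.80, 5.0), 1))   # ~3.2 m/s
```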




 Fig: Camera position over study area

                           Technical issues and problems encountered during flight

Initially, while connecting the drone to the mobile app, connectivity problems were encountered owing to version issues, caused by not using the updated version of the mobile app that interfaces with the drone. While using a DJI Mavic Pro with Pix4D Capture version 2.9.0 running on iOS 11.1.2 on an iPhone 6, several attempts were made at a double-grid pattern for 3D modelling. The controller twice indicated "check app", and the app reported an error and switched back into landing mode; in such cases, one needs to fly the drone back manually to a safe landing point. Even after a successful pre-flight check, the application sometimes encounters technical problems during flight; these may be internal issues of the application, which then needs to be updated.

PHOTOGRAMMETRIC SURVEYING AND 3D MODELING USING UNMANNED AERIAL VEHICLE ( DRONES ) --- PART 3 ( UAV VOLUMETRIC ANALYSIS )


Volumetric analysis and Calculation for Accuracy check for UAV acquired Images

Volume measurement from UAV-derived maps is an extremely fast, accurate and cost-effective way to analyse volumes from any device. User tests have found that volume measurements from UAV-acquired imagery are accurate to within 1–2% of traditional ground-based measurements. PhotoScan calculates the surface area as the sum of the areas of all faces in the model, so if the volume is closed prior to calculation, the bottom area is also included. Note also that side polygons are taken into account, so the result will be greater than the area of the 2D projection of the model onto the XY plane.
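The sketch below reproduces these two measurements on a toy closed mesh: surface area as the sum of triangle areas, and volume via the divergence theorem. It is a generic NumPy illustration rather than PhotoScan's own code, and the sign comment anticipates the negative value discussed under the figures below.

```python
import numpy as np

def mesh_area_and_volume(vertices, faces):
    """Surface area as the sum of triangle areas, and signed volume via the
    divergence theorem (sum of signed tetrahedra against the origin).  The
    sign of the volume depends on whether the face normals point outward or
    inward, which is why a depth-style measurement can come out negative."""
    v = np.asarray(vertices, dtype=float)
    area, volume = 0.0, 0.0
    for i, j, k in faces:
        a, b, c = v[i], v[j], v[k]
        area += 0.5 * np.linalg.norm(np.cross(b - a, c - a))
        volume += np.dot(a, np.cross(b, c)) / 6.0
    return area, volume

# Toy example: a unit cube with outward-oriented faces -> area 6.0, volume 1.0.
verts = [(0,0,0), (1,0,0), (1,1,0), (0,1,0), (0,0,1), (1,0,1), (1,1,1), (0,1,1)]
faces = [(0,2,1), (0,3,2),   # bottom (z = 0)
         (4,5,6), (4,6,7),   # top (z = 1)
         (0,1,5), (0,5,4),   # side y = 0
         (1,2,6), (1,6,5),   # side x = 1
         (2,3,7), (2,7,6),   # side y = 1
         (3,0,4), (3,4,7)]   # side x = 0
print(mesh_area_and_volume(verts, faces))    # approx. (6.0, 1.0)
```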


Fig: Selected area for volume calculation







Fig: Selected area (close holes) and the calculated volume





Fig: Sand on the ground (selected for volume calculation)

 
Fig: Part of the ground selected for volume calculation

Fig:  Actual view of the selected portion

 

Fig: Volume calculation (close holes)

A negative value indicates that the surface normals point inside the model, as this is a depth-type volume calculation; this is a common situation for quarries or interior projects. The selected area was around 125.5 square metres and the volume was 9.78 cubic metres.








Monday, May 21, 2018

PHOTOGRAMMETRIC SURVEYING AND 3D MODELING USING UNMANNED AERIAL VEHICLE ( DRONES ) --- PART 2 ( UAV Photogrammetric Survey )

                                                       DRONEWORK



                             UAV Photogrammetric survey
Photogrammetric approaches have been in use for the last 150 years. During that time, photogrammetry experienced substantial growth from analogue to today's digital techniques (Uysal et al., 2015). Photogrammetry helps in determining the geometric properties of objects from images; its output is typically a map, a drawing or a 3D model of some real-world object or land mass. It is the art of making maps and precise measurements from photographs, especially from aerial surveys. Digital images captured from a UAV are frequently processed using conventional methods or using newer, more efficient techniques with specific software. The advantage of using UAVs for photogrammetric surveying is that their speed, position and stabilisation can be controlled precisely, so successive, distortion-free above-ground images of a site can be acquired. These can then be processed to create 3D point clouds, digital terrain models, contour maps and digital surface models, or merged into a 2D or 3D orthomosaic image.
                                 
The fundamental principle used by photogrammetry is aerial triangulation. In this technique, data are acquired from at least two different points. The purpose of taking images from more than two points is to generate lines of sight; once these lines of sight are established, they are intersected to trace the point where they meet and thus determine the coordinates of the desired point (Castillo et al., 2017). The ability to quickly take aerial images from vertical, horizontal and oblique angles allows a degree of accuracy and flexibility that basically cannot be achieved by traditional means or by terrestrial surveying (http://libguides.wustl.edu/drones4data).
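A minimal numerical version of that intersection step is sketched below: given two camera centres and the viewing directions towards the same ground point, the rays are intersected in a least-squares sense (midpoint of their closest approach). The camera positions and target point are invented for the example.

```python
import numpy as np

def triangulate_two_rays(c1, d1, c2, d2):
    """Midpoint of the shortest segment between two viewing rays
    (camera centre c, direction d): the basic line-of-sight intersection
    behind aerial triangulation."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for ray parameters t1, t2 minimising |(c1 + t1*d1) - (c2 + t2*d2)|.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return (c1 + t1 * d1 + c2 + t2 * d2) / 2.0

# Two camera stations 40 m apart at 80 m altitude, both looking at a ground
# point near the origin (illustrative numbers only).
c1, c2 = np.array([-20.0, 0.0, 80.0]), np.array([20.0, 0.0, 80.0])
target = np.array([1.0, 2.0, 0.0])
print(triangulate_two_rays(c1, target - c1, c2, target - c2))   # ~[1. 2. 0.]
```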

Agisoft PhotoScan, Pix4Dmapper and other existing software packages support measuring distances between control points, as well as the surface area and volume of the reconstructed 3D model. All of these measurements are generally performed on the generated 3D mesh. For distance measurement, the existing software enables direct distances between points of the reconstructed 3D scene to be measured. The points used for distance measurement must be defined by placing markers in the corresponding locations, and the model coordinate system must be initialized before the distance measurement is performed. Alternatively, the model can be scaled based on known distance (scale bar) information. For measuring distances in the study site, we placed multiple markers above the buildings and in the front lawn, created scale bars from the 3D view context menu and finally viewed the distances in estimated value mode in Agisoft PhotoScan. Distances between cameras were measured in the workspace in a similar manner.
                                     

                                     
                                 Figure 1: Snapshot showing markers placed above the building and in the lawn



                    Table 1: Details of the markers

Labels      X/East       Y/North      Z/Altitude (m)
Point 1     85.441448    23.412947    588.20
Point 2     85.441446    23.412949    574.12
Point 3     85.441471    23.412950    587.40
Point 4     85.441529    23.412773    587.14
Point 5     85.441621    23.412796    587.02
Point 6     85.441555    23.412972    587.95
Point 7     85.441508    23.413190    573.79
Point 8     85.441702    23.413247    573.65
Point 9     85.441476    23.413224    574.16
Point 11    85.441329    23.413388    575.14
Point 12    85.441412    23.413404    575.64





          

Table 2: Calculated distances

Labels              Distance (m)
Point 1_Point 2     14.07
Point 7_Point 8     20.75
Point 4_Point 5      9.72
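As a rough cross-check of Table 2, the distances can be recomputed from the marker coordinates in Table 1 by converting the degree differences to metres around the site latitude. The sketch below uses a simple flat-earth approximation, so small differences of a few centimetres from the tabulated values are expected.

```python
import math

def marker_distance(p, q, site_lat_deg=23.413):
    """Approximate 3D distance (m) between two markers given as
    (longitude, latitude, altitude), using a local flat-earth conversion."""
    m_per_deg_lat = 111_320.0                                    # spherical approximation
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(site_lat_deg))
    dx = (q[0] - p[0]) * m_per_deg_lon
    dy = (q[1] - p[1]) * m_per_deg_lat
    dz = q[2] - p[2]
    return math.hypot(dx, math.hypot(dy, dz))

# Marker coordinates taken from Table 1 (longitude, latitude, altitude).
p1 = (85.441448, 23.412947, 588.20)
p2 = (85.441446, 23.412949, 574.12)
p4 = (85.441529, 23.412773, 587.14)
p5 = (85.441621, 23.412796, 587.02)

print(round(marker_distance(p1, p2), 2))   # ~14.08 (Table 2: 14.07)
print(round(marker_distance(p4, p5), 2))   # ~9.74  (Table 2: 9.72)
```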
                                                
      

Surface area and volume measurements of the reconstructed 3D model of the study area were also performed after defining the scale and coordinate system of the scene. To measure surface area and volume, one uses the Measure Area and Volume command from the Tools menu. Surface area is measured in square metres, while mesh volume is measured in cubic metres. Volume measurement can be performed only for models with closed geometry.


                                         
                                                




Watch the next part for more information.



