
Photogrammetry with the DJI Phantom Vision 2 Plus

This article has been revised and updated. A new step has been added to guide you through lens correction, and another guide has been included on using Pix4D Mapper Pro, the only photogrammetry software we recommend for use with the DJI if you want consistently good results. Examples from our tests have been added. Thanks for all your feedback. (last updated August 27, 2015)


I. Introduction

Photogrammetry is a relatively old practice of determining the geometric properties of an object from images. Accurate depth and a 3D representation of the geometry of an object or landscape can be generated from images taken from two or more overlapping perspectives. Measurements, contour lines and terrain maps; hillshade maps; water flow and accumulation maps; aspect, slope and other data can then be extracted by analysing the digital elevation model (DEM) generated through photogrammetry. The purpose of this article is to outline our experience with photogrammetry using the DJI Phantom Vision 2 Plus UAV.
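To give a concrete sense of the kind of data that can be pulled out of a DEM, the short sketch below derives hillshade, slope and aspect rasters from a DEM GeoTIFF using the GDAL Python bindings. It is only a minimal illustration, assuming GDAL and its Python bindings are installed; the file names are placeholders for a DEM exported as described later in this article.

# Minimal sketch: derive hillshade, slope and aspect rasters from a DEM
# GeoTIFF with the GDAL Python bindings. "dem.tif" is a placeholder name
# for a DEM exported from your photogrammetry software.
from osgeo import gdal

gdal.UseExceptions()

dem_path = "dem.tif"  # placeholder: your exported DEM

# Each call writes a new single-band GeoTIFF next to the input.
gdal.DEMProcessing("hillshade.tif", dem_path, "hillshade")
gdal.DEMProcessing("slope.tif", dem_path, "slope", slopeFormat="degree")
gdal.DEMProcessing("aspect.tif", dem_path, "aspect")

print("Derived hillshade.tif, slope.tif and aspect.tif from", dem_path)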

II. Equipment

  • UAV

For landscape analysis, UAVs are essential in that they provide a convenient and cheap way to capture aerial photography. The drone we are currently testing for these purposes is the DJI Phantom Vision 2 Plus.

The Vision 2 Plus carries a 14-megapixel FC200 camera with a rolling shutter and a heavily distorting 5 mm wide-angle lens, mounted on a gimbal for stabilization and tilt. The tilt control allows the camera to shoot at different angles during flight, with a gimbal range of 0 to 90 degrees. The straight-down perspective is essential for capturing images for orthophotographic mosaics and, in general, for the photogrammetry uses we're interested in. Setting flight paths on pre-determined waypoints is also available with this model, and the photograph-taking process can be automated and set to various time intervals. The maximum flight time is about 25 minutes, but realistically expect up to 15 minutes depending on environmental conditions.

Prior to acquiring this UAV we thought the camera supported both wide-angle and narrow settings; however, we found this to be true only for video, not for photos. Pix4D Mapper nonetheless gives good results even without prior lens correction.

  • Tablet / Phone

We use either an iPad mini or an iPhone / Android phone. The iPad is easier to use because of its larger display, which makes setting up the grid and waypoints much easier. With the recent release of the Pix4D Capture app, though, even a phone is easy to use for setting up the flight plan.

Update: You could also try the Drone Deploy app. We haven't tested it, but it looks promising and, judging by the demo, it works well with the native camera.

III. Software

1. Photogrammetry software:

  • Free software:

VisualSFM: Used to generate sparse point-cloud models. It appears to have no constraints on the number of photos you can import; however, on export you need additional software to end up with a file suitable for analysis, and this is not always a straightforward process. It is usually used in conjunction with CMPMVS and Meshlab; read on for more information on this process.

CMPMVS: Renders dense point cloud data from VisualSFM export files.

Meshlab: Allows for the editing of 3D meshes and point clouds. Combined with the output of VisualSFM and CMPMVS, it can render a mesh in high resolution.

  • Paid software:

Agisoft Photoscan: The Standard edition costs around $180, while the Professional edition costs up to $3,600. The interface is user-friendly and most of the processes are automated. Judging by the results of our extensive testing, however, we do not recommend using Agisoft Photoscan with data from the DJI. It yields poor results with photos from the DJI Phantom camera, regardless of whether you use the original distorted photos or fully lens-correct them. Results are consistently unreliable, with broken-up 3D meshes and almost always distorted, umbrella-shaped models.

Pix4D Mapper: This is the go-to software for work with the DJI; we are extremely pleased with the results. Regardless of whether you use the lens-corrected or the distorted photos, Pix4D Mapper yields excellent results with consistency and accuracy. An unlimited-use license costs $8,000, but a subscription is recommended for indie companies. It also integrates well with the Pix4D Capture app, which significantly increases accuracy and optimizes flights for good overlap.

2. GIS software:

  • Free software:

QGIS: This is the open-source, industry-standard GIS software. It can perform all of the essential analysis functions. It supports many plugins and the community is large.

  • Paid software:

ArcGIS: The most widely used GIS software. The community of professionals is large and the support is good.

3. Image editing software:

For our purposes, image editing software is used to carry out lens corrections. We have no extensive experience with free software here; what's essential is a tool that can process a batch of photos and automate the lens correction without destroying the metadata embedded in them.

Adobe Lightroom 5: We use this software to edit our photos and make lens corrections. Its most essential features are a lens-correction profile specific to the DJI FC200 camera, quick batch processing and cropping, and the ability to retain the GPS coordinates in the photos on export, without which a DEM cannot be generated.

IV. Process for generating DEMs and extracting contour lines

1. Photograph the landscape you want to analyse

  • The area of interest you want to analyse should always be in the centre of the photographs.
  • If you're using the DJI app, set a flight path and altitude relative to both the area you have to cover and the ground, so that you will have at least 60% overlap after the lens correction is done (see step 2 below, and the calculation sketch after this list).
  • If you use Agisoft, which in our experience does not work well with DJI data, take into account that significant cropping of the edges may be needed, so you have to take more photos in order to keep that amount of overlap after the cropping is complete, or you may lose significant 3D depth. Pix4D Mapper, on the other hand, gives excellent results even with bad flight paths and no prior lens correction.
  • You can try the new, free Pix4D Capture app, which makes the photo-grabbing process easier and more optimized. Note that, just like the DJI app, the Pix4D app only allows area coverage limited by the Phantom's battery life and range.
  • A 90-degree camera angle, shooting straight down with the sensor parallel to the ground, is best.
  • Take photos at midday, when there is less shadow cast, and avoid high-wind conditions in order to save battery life and reduce motion blur.
  • A basic do's and don'ts tutorial can be found here. A manual by Agisoft can also be found here.
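To give a feel for how altitude, overlap and photo interval relate to each other, here is a minimal calculation sketch in Python. The focal length and sensor dimensions below are rough assumptions used purely for illustration (they are not official FC200 specifications); substitute your own figures.

# Rough flight-planning sketch for a nadir (straight-down) camera: estimate
# the ground footprint of one photo, the spacing between flight lines and
# the maximum photo interval for a target overlap. All camera figures below
# are assumptions for illustration, not official FC200 specifications.
focal_length_mm = 5.0                 # assumed lens focal length
sensor_w_mm, sensor_h_mm = 6.2, 4.6   # assumed sensor dimensions (~1/2.3")
altitude_m = 80.0                     # flight height above ground
speed_ms = 5.0                        # ground speed of the UAV
side_overlap = 0.60                   # overlap between adjacent flight lines
forward_overlap = 0.70                # overlap between consecutive photos

# Ground footprint of one photo (simple pinhole model, flat terrain).
footprint_w = altitude_m * sensor_w_mm / focal_length_mm
footprint_h = altitude_m * sensor_h_mm / focal_length_mm

# Spacing between flight lines and between consecutive photo centres.
line_spacing = footprint_w * (1.0 - side_overlap)
photo_spacing = footprint_h * (1.0 - forward_overlap)
max_interval_s = photo_spacing / speed_ms

print("Footprint: %.0f m x %.0f m" % (footprint_w, footprint_h))
print("Flight-line spacing: %.0f m" % line_spacing)
print("Photo every %.0f m -> interval of at most %.1f s" % (photo_spacing, max_interval_s))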

2. Lense correction

This step is only really needed if you use Agisoft Photoscan, and even then it is unlikely that you will get good results after the lens correction and cropping. Pix4D Mapper, on the other hand, gives excellent results even without prior lens correction of the photos; in fact, we have occasionally had worse results with lens-corrected images in Pix4D than with the originals. If you do need to lens-correct for Pix4D, you can use the process below, but skip any additional cropping of the images, as that is unnecessary with Pix4D Mapper.

Any lens shorter than 18 mm will introduce distortion that we don't want, and with the Phantom carrying a 5 mm lens, the photos are heavily distorted. This is not a problem in Pix4D Mapper, but in Agisoft the mesh generated from these photos invariably has an umbrella shape when viewed from the side (see below). This distortion is then transferred to all the data extracted from the exported DEM, so the contour data, for example, inherits the distortion and becomes useless. Below is an example from a survey done at the Happy Horses Estate in Bulgaria.

Wide-lens distortion

Distorted point-cloud model

Lens correction is, to a degree, cropping the image. With a large number of photos, it is not efficient to do this one by one. Another critical thing is to keep the metadata (GPS coordinates, in this case) untouched. Lightroom 5 automates this whole process. To correct the distortion in Lightroom 5, we use the following steps:

1. Open Lightroom and go to ‘File’ -> ‘Import Photos and Video’ (Ctrl + Shift + I)
2. Drag the photos you wish to process into the pop-up window and click ‘Import’
3. In the upper right corner click on ‘Develop’
4. Select one of the photos and on the menu to the right scroll down to the Lens Corrections tab and expand it
5. Choose 'Profile' -> 'Enable Profile Corrections'. From the 'Lens Profile' menu, in the 'Make' tab choose 'DJI'. Leave everything else at default. You can play around with the distortion slider, but the default of 100 has worked well for us.
6. Now select all of your photos (Ctrl + A) and from the menu on the right click on the Sync button at the bottom. In the pop-up menu click on ‘Check All’ and then ‘Synchronize‘. This will apply the correction to all of the photos.
7. Go to ‘File’ -> ‘Export’ (Ctrl + Shift + E). First choose where you want to save them. Then scroll down to the bottom until you reach the Metadata menu and under ‘Include‘ choose ‘All Metadata’. Also uncheck ‘Remove Location Info’. This will keep the GPS coordinates embedded in the photos so that a DEM can be extracted later.
8. Click ‘Export’
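After exporting, it is worth confirming that the GPS tags really did survive, since the photogrammetry software relies on them for georeferencing. Below is a minimal check in Python, assuming the third-party exifread package is installed; the file path is a placeholder.

# Quick sanity check that an exported photo still carries GPS EXIF tags.
# Assumes the third-party "exifread" package; the path is a placeholder.
import exifread

def has_gps(path):
    with open(path, "rb") as f:
        tags = exifread.process_file(f, details=False)
    return "GPS GPSLatitude" in tags and "GPS GPSLongitude" in tags

photo = "export/DJI_0001.jpg"  # placeholder path to an exported photo
print(photo, "has GPS tags:", has_gps(photo))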

For comparison's sake, the images below show the point-cloud model generated from the same photos after lens correction. (Forested areas, on the left, are almost always distorted because of the lack of data under the canopy.)

Lens-corrected image in Lightroom 5

Point-cloud model after lens correction
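If you prefer a scripted alternative to Lightroom, wide-angle images can also be undistorted with OpenCV. The camera matrix and distortion coefficients below are placeholder values, not a calibrated FC200 profile; in a real workflow you would estimate them first (for example with cv2.calibrateCamera on checkerboard photos), and you would still have to copy the EXIF GPS tags into the corrected files, because OpenCV writes a bare image.

# Sketch of scripted lens correction with OpenCV. The intrinsics and
# distortion coefficients here are placeholders, NOT a real FC200
# calibration; derive your own with cv2.calibrateCamera first.
import cv2
import numpy as np

img = cv2.imread("DJI_0001.jpg")   # placeholder input photo
h, w = img.shape[:2]

camera_matrix = np.array([[2300.0,    0.0, w / 2.0],
                          [   0.0, 2300.0, h / 2.0],
                          [   0.0,    0.0,     1.0]])    # assumed intrinsics
dist_coeffs = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])     # assumed k1, k2, p1, p2, k3

new_matrix, roi = cv2.getOptimalNewCameraMatrix(camera_matrix, dist_coeffs, (w, h), 0)
undistorted = cv2.undistort(img, camera_matrix, dist_coeffs, None, new_matrix)

cv2.imwrite("DJI_0001_corrected.jpg", undistorted)
# Note: cv2.imwrite does not preserve EXIF (including GPS); copy the tags
# across separately before feeding the photos to photogrammetry software.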

 

3. Process your photos through your photogrammetry software and export a DEM file

  • In Pix4D Mapper Pro:

  1. Go to 'Project' -> 'New Project' (Ctrl + N). Choose a project name and where to save the final results, then click 'Next'. In the subsequent popup window choose 'Add Images', navigate to your photos and import them. Click 'Next' a couple of times and then 'Finish'. Your flight path should now appear on the screen, overlaid onto a satellite map.
  2. Go to 'Process' -> 'Options' -> 'Initial Processing' tab -> 'Feature Extraction' and from the dropdown menu choose '1 (Original Size)'. Next, go to the 'Point Cloud' tab and in the 'Image Scale' dropdown menu choose '1 (Original Image Size)'.
  3. If you want Google Earth tiles exported in .kml format, go to the 'DSM and Orthomosaic' tab and, under the 'Orthomosaic' section, tick 'Google Maps Tiles and KML'. Additionally, if you want contour lines automatically exported for you, go to the 'Additional Outputs' tab, choose the file type and set your contour interval. The 'Index Calculator' tab lets you set up an export of reflectance maps. Once you're done setting up, click 'OK'.
  4. Now simply click on 'Start' and the processing will begin. You'll get quality reports at several stages of the processing, which you can use to check whether things are moving in the right direction or whether you should stop the processing prematurely. Once Pix4D is finished, your final results will be automatically exported to the folder you specified in the beginning (a quick sanity check of the exported DEM is sketched at the end of this step). Additionally, explore the tabs in the side menus, as they allow you to visualize your results in different forms: point clouds, mosaics, height maps, 3D meshes, etc.
  • In Agisoft Photoscan:

  1. Go to ‘Workflow‘ and then ‘Add Photos’. Choose your desired photos and open them.
  2. Go to 'Workflow' and then 'Align Photos'. In 'Accuracy' choose 'High', and in 'Pair selection' choose either 'Generic' or 'Disabled', unless you have Ground Control Points, in which case choose 'Ground Control'. Click 'OK'.
  3. Go to 'Workflow' and then 'Build Dense Cloud'. You may have to experiment with the settings here depending on your computer and the landscape you have captured. In 'Quality' the usual input is either 'High' or 'Ultra', and in 'Depth Filtering' use 'Mild' for densely vegetated areas or 'Aggressive' for less vegetated landscapes. Click 'OK'.
  4. Go to 'Workflow' and then 'Build Mesh'. In 'Surface type' choose 'Height field', in 'Source data' choose 'Dense cloud', and in 'Polygon count' choose either 'High' or a custom number you can experiment with. Set 'Interpolation' to 'Enabled'. Click 'OK'.
  5. After the mesh is generated you can build your texture by going to 'Workflow' and then 'Build Texture'.
  6. Export the DEM by going to 'File' and then 'Export DEM'. In 'Projection type' choose 'Geographic'. Choose a coordinate system from the dropdown menu, usually 'WGS 84 (EPSG::4326)'. Tick 'Crop invalid DEM' and then click 'Export'.

A tutorial by Agisoft can be found here. And another one by MAPSandGIS can be found here.

  • For VisualSFM, you will have to combine it with CMPMVS and MeshLab. A good tutorial on this process can be found here and here. Another tutorial by digital composer Jesse Spielman on his experience with VisualSFM in conjunction with Meshlab can be found here.
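Whichever package you use, it can save time to sanity-check the exported DEM before moving on to GIS: confirm that it opens, is georeferenced and has a plausible elevation range. Below is a minimal check with the GDAL Python bindings (the file name is a placeholder).

# Sanity-check an exported DEM GeoTIFF with the GDAL Python bindings.
# "dem.tif" is a placeholder path to the DEM exported above.
from osgeo import gdal

gdal.UseExceptions()

ds = gdal.Open("dem.tif")
band = ds.GetRasterBand(1)
minimum, maximum = band.ComputeRasterMinMax()

print("Size:", ds.RasterXSize, "x", ds.RasterYSize, "pixels")
print("Geotransform:", ds.GetGeoTransform())
print("Projection set:", bool(ds.GetProjection()))
print("Elevation range: %.1f to %.1f" % (minimum, maximum))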

4. Import the DEM file into your GIS software.

  • In QGIS:
  1. Use the browser on the left side of the window to find your DEM file and then open it. Alternatively, just drag and drop it in the viewport.
  2. Go to ‘Raster’ -> ‘Extraction’ -> ‘Contours’
  3. In ‘Input file (raster)’ choose your DEM file
  4. In ‘Output file for contour lines (vector)‘ choose a location for the contour data to be saved.
  5. In ‘Interval between contour lines’ choose your desired contour spacing. Note, you may want to change the measuring system used by the software in the program settings (this is in Settings -> Options in the software menus).
  6. Leave the other settings at default and click 'OK'. Your contours will be generated, and a pop-up window will inform you when they are ready. (A scripted GDAL equivalent of this extraction is sketched after this list.)
  7. To smooth out your contours and clean up the noise from the DEM, watch this tutorial from Joanna Mardas. You’ll need to download the Generalizer plug-in from the repository from within QGIS.
  • In ArcMAP (Note: you will need the Spatial Analyst extension for this):
  1. Load your DEM file from ‘File’ -> ‘Add Data’ -> ‘Add Data’. Alternatively, just drag and drop it in the viewport.
  2. If the file has multiple bands, load all of them, but afterwards leave only one of them visible from the layer menu.
  3. Go to the search tab in the far left of the software and type in ‘Contours‘. Open the tool ‘Contour (Spatial Analyst)’
  4. In ‘Raster Input‘ choose the layer of your DEM file, or the band that you left visible in step 2. In ‘Output Polyline Features’ choose a file name and a place to store the contour information.
  5. In 'Contour Interval' choose your desired contour spacing.
  6. Leave the other settings at default and click 'OK'. Your contours will be generated, and a pop-up window will inform you when they are ready.
  7. To smooth out the contours and clean up the DEM noise, check this tutorial by Pix4D.
  • Forested or densely vegetated (40% or more) areas are a bit problematic for DEM generation and contour extraction through photogrammetry. After extracting the DEM, you can connect individual points from the raster cells of clearings within the canopy and then interpolate them in order to get a relatively accurate contour visualisation. Here is a reference to this approach; additionally, check the examples section below for an example.
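The QGIS contour step above is essentially a front end to GDAL's contour tool, so the same extraction can be scripted if you need to repeat it for many DEMs. Below is a minimal sketch using the GDAL/OGR Python bindings; the file names and the 1 m interval are placeholders, and the gdal_contour command-line utility does the same job.

# Minimal sketch of scripted contour extraction from a DEM, equivalent to
# QGIS's Raster -> Extraction -> Contour (which wraps gdal_contour).
# File names and the 1 m interval are placeholders.
from osgeo import gdal, ogr, osr

gdal.UseExceptions()

dem = gdal.Open("dem.tif")
band = dem.GetRasterBand(1)

# Create a Shapefile layer for the contour lines, reusing the DEM's
# spatial reference so the contours stay georeferenced.
srs = osr.SpatialReference(wkt=dem.GetProjection())
driver = ogr.GetDriverByName("ESRI Shapefile")
out_ds = driver.CreateDataSource("contours.shp")
layer = out_ds.CreateLayer("contours", srs=srs, geom_type=ogr.wkbLineString)
layer.CreateField(ogr.FieldDefn("ID", ogr.OFTInteger))
layer.CreateField(ogr.FieldDefn("ELEV", ogr.OFTReal))

# Arguments: band, interval, base, fixed levels, use-nodata flag,
# nodata value, output layer, ID field index, elevation field index.
gdal.ContourGenerate(band, 1.0, 0.0, [], 0, 0.0, layer, 0, 1)

out_ds = None  # close the dataset to flush features to disk
print("Contours written to contours.shp")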

V. Examples

      1. Village in Strandzha mountain

  • 4 ha
  • 190 photos
  • Grid flight using Pix4D Capture for optimized overlap
  • Lens corrected with the Lightroom DJI profile. No further cropping.
  • Processed with Pix4D Mapper
  • Analysed with QGIS

    2. Solnik

  • ~ 30 ha
  • 170 photos
  • Grid flight using Pix4D Capture for optimized overlap
  • Lens corrected with the Lightroom DJI profile. No further cropping.
  • Processed with Pix4D Mapper
  • Analysed with QGIS
  • Point interpolation for the contours in the forested areas

    3. Field

  • ~ 2 ha
  • 75 photos
  • Grid flight using Pix4D Capture
  • No lens correction
  • No cropping
  • Processed with Pix4D Mapper
  • Analysed with QGIS

    4. Village

  • 100 ha
  • 290 photos
  • Bad flight path (not enough battery life for a grid)
  • No lens correction
  • Processed with Pix4D
  • Analysed with ArcGIS
  • Processing time: 19h:05m:14s
  • This was really a test to see how well Pix4DM works with bad inputs. Impressive.

    5. Quarry

  • 4.5 ha
  • 175 photos
  • Processing time: 12h
  • App used: DJI iOS application
  • Processed with Pix4DM
  • No prior lens correction

VI. Conclusion

We hope this article has proven helpful and has provided an overview of working with the DJI Phantom Vision 2 Plus. This UAV, although not ideal for terrain mapping, is a good start for photogrammetry in general. It is better suited to smaller properties (up to about 15 ha from our testing). Our main criticism is that it has not been designed for easy modification, particularly of the gimbal and payload. Other than that, it is good value for money: user-friendly and full of features.

If you have any questions or feedback that you'd like to share, please use the comment section or contact us.

 


45 Responses to Photogrammetry with the DJI Phantom Vision 2 Plus

  1. Rodrigo Quiros September 28, 2014 at 6:16 pm #

    Amazing article, well researched.

  2. Paulo Silva November 27, 2014 at 4:15 pm #

    Hello
    I'm in need of some help and hoping you can assist me. I'm using the same UAV as you, the Phantom 2 Vision Plus. I followed your tutorial and it works great. The only thing that bothers me is that when I put contour labels on my contour lines they show negative values, the same as generated in the DEM. It seems to me that Agisoft is assuming that the position of the UAV is at height "0" and everything below it has negative values. How do I correct this? How do I tell Agisoft where the ground is so it makes the DEM with the correct heights?
    Thanks in advance
    Paulo Silva

    • admin December 4, 2014 at 7:48 pm #

      Hello,

      Thanks for writing.

      From our experience, the problem is not within Agisoft but with the DJI sensors: there is no embedded altitude parameter within the photos that Agisoft can use to create the DEM. You can manually adjust the GPS coordinates of the camera positions of your photos by going into the Ground Control tab in Agisoft. There, at the top left, each of your photos is listed with its coordinates and altitude. The photos from the DJI drone show an altitude of zero by default. Adjusting the altitude parameter to the appropriate height from which the photos were taken might solve the problem.

      In all honesty, the DJI drone is not suitable for mapping for this and other reasons and we’ve moved away from it. This article is due to be rewritten and updated soon.

      Kind regards

      • isaac December 5, 2014 at 11:09 am #

        Great post. I am also having issues with the limitations of the P2V+. Which UAV platform camera combination did you end up moving to?

        • admin December 5, 2014 at 3:46 pm #

          We’ve moved to a Skywalker fixed-wing framework and a modified Canon SX260.

  3. Antonio December 18, 2014 at 7:06 pm #

    Truly Amazing article! Thank you very much!

  4. Danny December 22, 2014 at 9:53 am #

    Fantastic and well done work. Thank you for sharing with the world. Much appreciated.

  5. Jason Pretorius January 5, 2015 at 2:40 pm #

    Nice article.

    Have you managed to do any testing with the Piksi RTK GPS yet? I am looking to integrate their system on my UAV, and was wondering if it is possible to reference aerial imagery directly from the GPS, thereby eliminating the need for post-flight ground truthing.

    I have e-mailed Swift Navigation asking them if Piksi can be integrated with the CHDK capability of Pixhawk Autopilot, allowing each image to be referenced as it is taken. I have not received any response from them.

    Do you have any idea if this is even possible?

    Thanks.

    Regards,

    • admin January 5, 2015 at 3:15 pm #

      Hello, thanks for writing.

      Update:

      We've not tested the Piksi yet. However, if I've understood you correctly, what you're intending to do is make the camera position, as it is in the air, be its own reference point for accuracy. This approach lowers accuracy compared to the ground control point approach, but you could still get to 3 cm accuracy (as opposed to 1.5 cm or less with GCPs). After a bit of research I'm still not certain about the integration with Pixhawk. Perhaps you can also check with diydrones.com to see if it's been done. The only company I'm aware of that has made this work is senseFly, whose RTK eBee can work independently of ground-based RTK GCPs.

      https://www.sensefly.com/drones/ebee-rtk.html

      Cost: ~17,000 – 25,000 Euros

      Update 01/24/2015

      Another company uses the same approach with their UAV. You can check in here:

      http://www.mavinci.de/en/siriuspro

      Cost: 35,000 – 45,000 Euros (the quote we got was for 38,000 EUR, but it seems to be client-specific)

      Kind regards

  6. Gautier February 19, 2015 at 9:38 pm #

    Following your work, Georgi, and waiting for client interest to grow here so that one day I can take advantage of this type of technology (whether it be me or others investing in it). Looking forward to further developments.

  7. Ballesteros Espín Adolfo Luis February 23, 2015 at 5:50 am #

    I apologize for sending a contribution of only 50 US dollars. I live in Ecuador and my income is not very big, but I think companies like Huma, who unselfishly give away so much information about current research, deserve the support of all who visit their website, because the information is very important, valuable and accurate. Here in Ecuador I am very committed to the environment, and I'm doing a study for my community that I would like to share with you; I think I will recover some resources later, and I will certainly share something with you. I'm also thinking we could arrange an exchange: a visit by you to my country, or by me or my son, David Ballesteros Medina, to your country, to exchange experiences. We are working with the DJI Phantom 2+ with its original Vision camera. The information you publish is very important for my small business, which is just starting. I'll be bothering you for advice on applications; some things I have experienced already, but your experience will be invaluable for me.
    Sincerely.

    Adolfo Ballesteros Espín
    David Ballesteros Medina

    • admin March 4, 2015 at 3:13 pm #

      Thank you Adolfo for your generosity!

      Regards

  8. Antonio March 20, 2015 at 9:36 pm #

    Hello everyone.
    I have a problem with umbrella-shaped distortion in the point clouds. I straightened the images using Photoshop's lens correction, but unfortunately the result remains highly distorted.
    Can someone help me solve this annoying problem? Thanks

    • admin March 21, 2015 at 12:09 am #

      Hi Antonio. It's hard to suggest anything without seeing some before/after screenshots, as well as your lens correction settings. You can write us an email through the contacts page.

  9. Brad Baker March 21, 2015 at 8:47 pm #

    A couple of months ago we obtained the Phantom 2 Vision Plus. I am looking forward to using this article to teach photogrammetry. I do have one equipment starter question. In the article you mention Android and iOS apps, so I assume that is how you are getting the pics. Is there any advantage to adding the DJI datalink and CAN module to the Phantom 2V+ so you can use the photogrammetry section of the PC version of Ground Station to collect the photos? This seems a more systematic way to make sure I always get the 60% coverage you recommend. This is really a money question: I am trying to figure out whether I should spend the money on the datalink/CAN or concentrate more on the software you recommend instead.

    • admin March 21, 2015 at 10:55 pm #

      Hi Brad, thanks for writing. Unfortunately the Phantom Vision 2 Plus does not support integration with the DJI datalink and ground station. We tried everything to make it work, but it is only meant for the earlier model Vision 2; it's impossible to connect it to the Vision 2 Plus without soldering work that could damage the system. It's a shame, and ironic, that DJI would not make it work with the newer model. What's worse is that they haven't made this clear anywhere in their documentation or on the web, so people end up buying the ground station hardware in vain. For these reasons you're pretty much stuck with the apps, of which Pix4D for Android is the best.

      Best regards

      • Brad Baker March 22, 2015 at 3:46 am #

        Who knew buying an older Phantom 2 was a better way to go for our purposes. That is exactly what I was afraid of; you just saved me a lot of time, money and headaches. At least finding this webpage will help push the program over into DEM data collection and processing. We will just have to get by doing manual photo grabs with the iOS/Android apps for the time being and enjoy the real-time advantages of the Phantom 2V+.

        • admin March 22, 2015 at 12:43 pm #

          The apps allow you to set automatic photo-grabbing and waypoint routes (only the Pix4D app allows an automatic grid route; the DJI app is pretty bad in this regard). You can estimate how to set the timer relative to how large the area is and how high you fly, so that you get 60% coverage. But generally, don't worry too much about that. Just set the photo timer to the quickest it can be (every 2 seconds, if I remember correctly) and make a quick judgement as to how high you have to fly to get reasonable coverage.

  10. Paul Powton April 21, 2015 at 10:47 pm #

    OK, the new Phantom 3 is just being released with a 94-degree camera @ 12 MP and a larger sensor.
    See link: http://www.dji.com/product/phantom-3/professional-camera
    I currently own & fly the Phantom 2 Vision+, but the camera is way too wide for serious applications – this one however requires no post-processing & seems to be (possibly) what we were all waiting for?
    The included Lightbridge package gives real-time video / pic streaming – quite impressive!
    Still will only get 20 mins average flying time in decent weather but…
    Opinions would be appreciated.
    Thanks

    • admin April 24, 2015 at 11:33 pm #

      I'd wait to see some feedback from others and examples of the camera and functionality overall. I'm thinking, though, that the camera might just have the 35 mm-equivalent lens (perhaps simulated through automatic cropping) only for video, similar to how it is on the earlier models. Flight time is indeed really low. I'd never again invest in a UAV that has less than 40 minutes of flight time if you intend to work on anything larger than, say, 20 – 60 ha.

  11. Vova April 30, 2015 at 12:02 pm #

    Excellent article!! Thanks for sharing your knowledge and experience

  12. Darren May 5, 2015 at 9:59 am #

    This article is incredibly helpful and very well put together, thank you for the detailed information! My business offers aerial video and photography services for marketing, and we're in the process of implementing a mapping and survey division with UAVs. We currently utilize the Phantom 2 w/ H4-3D gimbal, a 2.4 GHz downlink for autonomous flight plans via waypoints, and a GoPro modified with a flat 5.5 mm lens. So far this setup seems to be pretty solid for the investment (around $3k altogether with the remote controller upgrade).

    I would love to get your thoughts on:
    1. The setup I just mentioned and any improvements or issues that you might be aware of regarding it.
    2. The current best software for processing images for Topo maps.
    3. The specifics of the setup that you recommend to produce the best topo maps with a UAV.

    Thanks again for the great article and best of luck with your work!
    -Darren

    • admin May 5, 2015 at 10:40 am #

      Hi Darren,

      1. Your setup is okay. Depending on the scale you want to work at, though, the Phantom may not have enough battery life for comfortable work. We've managed to cover a maximum of 100 ha with one battery and a non-ideal flight path (not a grid), but the resulting maps were okay. I'll update the article today with that specific flight. For accuracy you'd need a good (RTK) GPS for setting up ground control points.
      2. Pix4D Mapper Pro, hands down the best software on the market. And with QGIS you can analyze the exported Digital Surface Models from Pix4D and generate topo maps.
      3. Can you elaborate on this question please?

      Best wishes

      Edit: Article is updated now with more examples

  13. Darren May 6, 2015 at 8:38 pm #

    Super helpful! Thank you.
    To elaborate on the 3rd question: what type of UAV are you using for mapping now? Camera, gimbal, downlink, ground station, etc. Just curious on the setup that you’ve had the best success with.
    I’ll definitely look in to Pix4D.
    Lastly, is it feasible to cover more than 100 Ha w/ the Phantom 2 setup we’re currently working with by simply landing the craft, changing out the battery, and resuming flight?
    Thanks again!

    • admin May 7, 2015 at 3:46 am #

      http://www.event38.com/ProductDetails.asp?ProductCode=E384

      This UAV is in a similar price range as DJI and can cover more than 400 Ha at a time, and is easier to work with professionally. It was recommended to us by professionals with years of experience (the guys from DroneMapper.com). And this is the ideal setup.

      Telemetry: RDF900 + Long range
      Camera: Canon SX260 with CHDK. You can also easily modify them to NIR
      Intellishoot: for optimized overlap

      The rest is extra.

      Sure, you can cover more than 100 ha if you take the landscape in portions. It's roughly 30-50 ha per battery for a decent grid flight and overlap; you'd also have to fly higher to cover more. A distorted lens can also help cover more, as you can see from the examples. The 100 ha flight relied on the distorted lens for the detail on the sides. I could have captured that landscape in two flights, but I had no extra battery left (too many projects in one day).

  14. Darren May 8, 2015 at 8:42 am #

    Again, super helpful!

    Keep up the good work and good luck with your flights!

  15. chris May 14, 2015 at 7:21 pm #

    wow !
    many thanks for this great post. It may help me to elaborate my little project.

  16. Michael July 9, 2015 at 2:33 pm #

    Hi there, great post, thinking of getting a DJI Phantom 2 Vision+ for my fieldwork. I have a few questions though. Can you manually set the drone to take continuous pictures every few seconds without using the PIX4D app? I shall be doing fieldwork in a fairly remote place without internet connection. Additionally, can you cache maps using PIX4D?

    • admin July 9, 2015 at 4:22 pm #

      Hi

      You can set up continuous picture taking without the Pix4D app, using the standard DJI app; in that regard, the app is quite good actually. You do really need the internet connection (we use 3G), because at the moment, unfortunately, neither app can cache maps or waypoint routes.

      Regards

  17. Johnny July 16, 2015 at 12:13 pm #

    it is a very nice tutorial thanks for sharing it.

  18. Gary Wilson August 11, 2015 at 8:01 pm #

    Do you have any thoughts on using the DJI Matrice 100 for photogrammetry? It seems like it offers a much more customizable solution than other products.

    • admin August 12, 2015 at 7:51 am #

      Hi Gary, thanks for writing.

      For starters, I really like the direction they are taking with this. Finally they are starting to realize that this market has plenty of professionals looking for professional equipment, not just hobbyists. That being said, things are still very immature development-wise. I expect this technology to take off a lot more in the next 18 months, and prices to drop.

      The Matrice is a bit overpriced in my opinion. Depending on the scale you want to work with, there might be better options. For large scale, the e38 still seems better.

  19. chaney September 3, 2015 at 9:03 pm #

    Thanks so much. This was so much fun to test.

    Any advice on improving photogrammetry for smaller indoor objects – human heads and props – with a single DSLR camera? I think it would require a finer-grained position sensor like the Piksi, but ideally a system that does not rely on GPS, as that may not work well indoors.

    • admin September 3, 2015 at 9:19 pm #

      Hi, thanks for writing.

      You don't need GPS for photogrammetry. If you can devise a way to take photos around the object in a consistent way (distance and axis), you're good to go. And even then, there's a lot of room for compromise. Just test to see what lighting conditions work best, because depth is what ultimately determines the accuracy of the shapes.

      • chaney September 4, 2015 at 12:03 am #

        I think the cylindrical configuration would work well for the example I gave, but I should have given broader examples. Imagine a room that needs to be modeled and cannot be put on a turntable. I imagine walking around and snapping a bunch of pics. I would think that the photogrammetry "solve" would work much better if each pic's location and orientation were provided, in a similar way to how Pix4D needs the geotag.

        Thanks

  20. cristian October 14, 2015 at 2:28 am #

    Thanks for the tutorial! Hi, I need help, I have this error:

    Failed to find two images for initialization <—????
    Resuming SfM finished, 1 sec used

    —————————————————
    2 cams, 75 pts (3+: 0) 2.586 (17 LMs in 0.14sec)
    Focal Length : [5376.000]->[5388.612]
    Focal Length : [5376.000]->[5371.886]
    END: No more images to add [0 projs]

    #############################
    Initialize with DSC_7554 and DSC_7555
    74 3D points initialized from two images
    PBA: 74 3D pts, 2 cams and 148 projs…
    PBA: 5.590 -> 3.201 (91 LMs in 0.93sec)
    Focal Length : [5376.000]->[6729.170]
    Focal Length : [5376.000]->[9066.411]
    END: No more images to add [0 projs]

    #############################
    Failed to find two images for initialization
    Resuming SfM finished, 1 sec used

    —————————————————
    2 cams, 75 pts (3+: 0)
    148 projections (3+: 0)
    —————————————————
    2 model(s) reconstructed from 18 images;
    4 modeled; 0 reused; 18 EXIF;
    1MB(1) used to store feature location.
    —————————————————

    ########——-timing——#########
    Structure-From-Motion finished, 1 sec used
    1.1(1.1) seconds on Bundle Adjustment (+)
    1.1(1.1) seconds on Bundle Adjustment (*)
    #############################
    —————————————————————-
    Run full 3D reconstruction, finished
    Totally 1.000 seconds used

    • admin October 14, 2015 at 8:20 am #

      Hi,

      It seems that either:
      1) You are trying to use just 2 images for reconstruction, which might not be enough (I think 4 is the minimum). If so, try adding more images.
      or
      2) It cannot find the path to the said 2 images. If so, try loading the images from a different location on the hard drive.

      Regards

  21. cristian October 14, 2015 at 9:39 am #

    My post was cut off, let me refresh.

    There are 68 images. The strange thing is that I tried with other pictures from the internet and it worked perfectly; in fact, I previously did my own model and it worked fine.
    Another thing I tried was uploading to Autodesk Memento, and Memento worked, but not completely – it formed only half of the object.

    So I think the problem is in these 64 images specifically.

    Clearly I did something wrong taking these photos, but I would like to know what it is.

    I'm using a Nikon D300s, daylight pictures, no flash, and I'm taking three pictures per step: 0 degrees, 45 degrees and -45 degrees.

    So I don't know; perhaps I'm missing photo data on export, or perhaps I have a problem with the camera.

    Nevertheless, thank you! Very complete tutorial.
    Another interesting method I saw out there is to connect a barcode reader to the camera, to get more precise distance data.

  22. remi December 29, 2015 at 7:02 am #

    hi,

    I have a question: are the images captured by the DJI Phantom geo-tagged?

    • admin December 29, 2015 at 2:35 pm #

      Hi,

      Yes, the images captured by the DJI PV2+ are geotagged.

  23. Ferrarez February 19, 2016 at 6:18 pm #

    Good afternoon!
    I would like to acquire the Pix4D software. I need to know the software and hardware requirements so I can buy a computer; I will use a 12 MP camera to take 120 photos.

  24. Ricardo Hernandez Alfaro June 24, 2016 at 8:22 pm #

    Thanks a lot for this post. It is so helpful.
    One question:
    Is there another way to do this work without the Vision+? I have a Phantom 2 without this feature.

  25. Ricardo Hernandez Alfaro June 25, 2016 at 2:48 am #

    In example 5, "Quarry": how many meters above the ground did the drone need to be to take the photos?
    Thanks.
