Is there a way to configure QGIS to prevent the st_extent call when adding a PostGIS table?
I'm trying to add an extremely large table with a point geometry field to QGIS, and QGIS appears to hang. After some investigating in the PostgreSQL database, I tracked it down to an
st_extent call on everything in the table.
Is it correct to assume that the reason QGIS does this is in order to obtain a full extent for the layer? And if so, is there a way to bypass this? Maybe either use the spatial reference extent or manually provide the extent?
QGIS 2.6 64-bit
POSTGIS="2.1.3 r12547" GEOS="3.4.2-CAPI-1.8.2 r3924" PROJ="Rel. 4.8.0, 6 March 2012" GDAL="GDAL 1.10.0, released 2013/04/24" LIBXML="2.7.8" LIBJSON="UNKNOWN" TOPOLOGY RASTER
PostgreSQL 9.3.5, compiled by Visual C++ build 1600, 64-bit
Have a look at the settings of the PostGIS connection. There is an option called "Use estimated table metadata" that is unselected by default. Check it and QGIS will use ST_Estimated_Extent (http://postgis.net/docs/manual-2.0/ST_Estimated_Extent.html). Remember to run VACUUM ANALYZE beforehand.
The "use estimated metadata" option on the connection works for newly created layers, but it doesn't change existing layers that connect to that database, nor did I find any way to change it in the layer properties (I'm using 3.8.3). I found it relatively quick to change all my existing layers by editing the QGS file to add
estimatedmetadata=true to all PostGIS data source descriptions.
Here are the steps I followed using the Linux terminal. Similar steps can be done in any OS, as long as you have a way to search and replace text.
Extract QGS file from the QGZ:
$ unzip map.qgz map.qgs
Add estimatedmetadata=true to all PostGIS data source descriptions in the QGS. For my particular file, I checked and saw that none of the data source lines had an
estimatedmetadata= entry, so rather than switching an existing value to
estimatedmetadata=true, I had to insert a new
estimatedmetadata=true. I looked for a string that appeared in the middle of all of the data source descriptions, so I could do the equivalent of a search-and-insert by using a text replace. The string I found in my file was
checkPrimaryKeyUnicity=, so I ran the following command:
$ sed -i 's/checkPrimaryKeyUnicity/estimatedmetadata=true checkPrimaryKeyUnicity/g' map.qgs
You may have to adapt this to the details of your specific file, make the changes in a more manual way, or use a tool with XML-specific editing features.
Replace the QGS file in the QGZ:
$ zip -u map.qgz map.qgs
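For convenience, the unzip / sed / zip steps above can also be scripted. Below is a minimal Python sketch of the same idea; the file names map.qgz / map.qgs and the checkPrimaryKeyUnicity anchor string come from my file, so adapt them to yours (and note the sketch rewrites the archive with only the QGS member, so copy across any other files your .qgz contains, such as a .qgd):

```python
import zipfile

def enable_estimated_metadata(qgs_text):
    """Insert estimatedmetadata=true before the anchor string,
    exactly like the sed command above."""
    return qgs_text.replace(
        "checkPrimaryKeyUnicity",
        "estimatedmetadata=true checkPrimaryKeyUnicity",
    )

def patch_qgz(qgz_path="map.qgz", qgs_name="map.qgs"):
    # Read the QGS out of the QGZ, patch it, and write it back.
    with zipfile.ZipFile(qgz_path) as z:
        text = z.read(qgs_name).decode("utf-8")
    patched = enable_estimated_metadata(text)
    with zipfile.ZipFile(qgz_path, "w") as z:
        z.writestr(qgs_name, patched)
```

As with the sed version, you may have to adapt the anchor string to the details of your specific file.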
Store and visualize your raster in the Cloud with COG and QGIS
We have recently been working for the French Space Agency (CNES), which needed to store and visualize satellite rasters on a cloud platform. They want to access the raw image data, with no transformation, in order to carry out deep analyses such as instrument calibration. Using a classic cartographic server standard like WMS or TMS is not an option, because those services transform datasets into already-rendered tiles.
We chose to use a fairly recent format managed by GDAL, COG (Cloud Optimized GeoTIFF), and to target the OVH cloud platform because it provides OpenStack, an open-source cloud computing platform.
This code makes a window whose dimensions the user cannot change, and also disables the maximise button of the Tk() window.
Within the program you can change the window dimensions with @Carpetsmoker's answer, or by doing this:
It should be fairly easy for you to implement that into your code. :)
You can use minsize and maxsize to set a minimum and maximum size. Setting both to the same value will give your window a fixed width and height of 666 pixels, while setting only minsize will make sure your window is always at least 666 pixels large, but the user can still expand the window.
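A minimal sketch of both variants (the 666-pixel size comes from the example above; the helper names are mine):

```python
import tkinter as tk

def fixed_size_window(width=666, height=666):
    """Window the user cannot resize: minimum and maximum size are identical."""
    root = tk.Tk()
    root.minsize(width, height)
    root.maxsize(width, height)
    # Equivalent alternative: root.resizable(False, False)
    return root

def min_size_window(width=666, height=666):
    """Window that is at least width x height but can still be expanded."""
    root = tk.Tk()
    root.minsize(width, height)
    return root
```

Call either helper and then `root.mainloop()` as usual.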
Installation of QGIS
You can download QGIS from the following link  using the green Download Now button. Choose your platform: Windows, Mac OS X, Linux, BSD, or Android from the various drop-down menus on the download page (Windows will be used for this tutorial). If you have Windows, check your specifications, as either you will use 32 bit or 64 bit. Select the latest standalone version and download the .exe installer. Once downloaded, go to the location the .exe installer downloaded to, run it, and use the Setup Wizard that appears to configure the program.
Once QGIS is fully installed onto your computer, run it either via the desktop icon, or the start-menu launcher. Once opened, a program window like figure should appear. To create a project file for our project, click Project in the upper left corner, and Save As. Browse to the folder created in the earlier step, give the project a name, like Georeferencing Tutorial, and hit Save.
How to prevent tensorflow from allocating the totality of a GPU memory?
I work in an environment in which computational resources are shared, i.e., we have a few server machines equipped with a few Nvidia Titan X GPUs each.
For small to moderate size models, the 12 GB of the Titan X is usually enough for 2–3 people to run training concurrently on the same GPU. If the models are small enough that a single model does not take full advantage of all the computational units of the GPU, this can actually result in a speedup compared with running one training process after the other. Even in cases where the concurrent access to the GPU does slow down the individual training time, it is still nice to have the flexibility of having multiple users simultaneously train on the GPU.
The problem with TensorFlow is that, by default, it allocates the full amount of available GPU memory when it is launched. Even for a small two-layer neural network, I see that all 12 GB of the GPU memory is used up.
Is there a way to make TensorFlow only allocate, say, 4 GB of GPU memory, if one knows that this is enough for a given model?
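One commonly cited approach (a sketch, not necessarily the only way) is to pass a per_process_gpu_memory_fraction in the session config; about 0.333 of a 12 GB Titan X is roughly 4 GB. The import guard is only so the sketch degrades gracefully on machines without TensorFlow:

```python
try:
    import tensorflow as tf
except ImportError:
    tf = None  # TensorFlow not installed; sketch only

def capped_session(fraction=0.333):
    """Return a Session limited to roughly `fraction` of each GPU's memory
    (TF 1.x-style API, reached via tf.compat.v1 on TF 2.x)."""
    if tf is None:
        return None
    v1 = tf.compat.v1 if hasattr(tf, "compat") else tf
    gpu_options = v1.GPUOptions(per_process_gpu_memory_fraction=fraction)
    return v1.Session(config=v1.ConfigProto(gpu_options=gpu_options))
```

An alternative is to let the allocation grow on demand instead of capping it: set `allow_growth=True` on the GPUOptions, or on TF 2.x call `tf.config.experimental.set_memory_growth(gpu, True)` for each GPU.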
A.5. Release 3.0.0rc2
If compiling with PostgreSQL+JIT, LLVM >= 6 is required
Supported PostgreSQL versions for this release are PostgreSQL 9.5 to PostgreSQL 12; GEOS >= 3.6 is required. Additional features are enabled if you are running Proj 6+ and/or PostgreSQL 12, and there are performance enhancements if running GEOS 3.8+.
4534, Fix leak in lwcurvepoly_from_wkb_state (Raúl Marín)
4536, Fix leak in lwcollection_from_wkb_state (Raúl Marín)
4537, Fix leak in WKT collection parser (Raúl Marín)
4535, WKB: Avoid buffer overflow (Raúl Marín)
Friday, 27 February 2015
Remote sensing - What should come first: radiometric calibration or co-registration?
For a pair of satellite images, in pre-processing, which step should come first: radiometric calibration or co-registration?
Suppose that we have two images that we want to co-register, or one image that we want to register to the earth:
The first step is to remove the errors in each image, both geometric and radiometric. Each image has some geometric errors due to:
- Earth rotation
- Scan time skew
- Aspect ratio
- Panoramic effect (bowtie error)
- Earth curvature
These errors cause the pixels to drift during image acquisition and so affect the radiometric information. So when we remove these geometric errors (transferring the pixels to their correct positions in the image), we should do radiometric interpolation too. Radiometric interpolation can be done through standard resampling techniques such as nearest neighbour, bilinear interpolation, or cubic convolution.
Also, if the two images have different sizes, we should resize them in this step using the same interpolation techniques.
The second step is to co-register the images (determining the mathematical transformation between the two images). This can be done in different ways. One of them is to register both images to the earth: when both images are registered to the earth (the same reference system), they are co-registered to each other.
Different mathematical models are used for registration depending on various factors, including the type of sensor used to acquire the images. One model used for HRSI images is terrain-independent RPC coefficients.
Thus we always remove the errors in each image first (calibration of each image) and then co-register them. This is true for all kinds of images in remote sensing, including PolSAR, InSAR, hyperspectral and multispectral images.
Two images are co-registered only when both of them are free of errors.
Coordinate system - Waterman butterfly projection in Mapnik
Like the title says, how would someone configure Mapnik to use the Waterman butterfly projection?
Otherwise, what other tools would be able to render using this projection?
I don't think mapnik or proj4 are able to render that kind of projection.
According to that excellent post, OpenLayers with the Protovis library would be able to render not exactly the Waterman projection, but the Fuller projection (also called Dymaxion).
You even have an online example here.
Pyqgis - Printing centered map from a QGIS project for each point in shapefile?
I need to produce on the order of 100 maps that are centered on each point of interest in a shapefile. I would like to prepare all the layers in a master QGIS project, and set up the composition for one point (so that printing 100 maps could be done manually, if need be).
I'd have something like the following layers:
And I would like to then automate printing to svg something like:
- For each point in a shapefile
- Center the map canvas on that point
I'm reasonably certain I know how to do 1 & 2, but haven't found details on 3 & 4 on this site.
- In the print composer, enable the atlas and use the point layer as the atlas coverage layer.
- Set the map item to be controlled by the atlas, and choose a fixed scale.
Back in the QGIS main window, for each layer that you want to filter according to a certain distance to the point, use rule-based symbology with the following rule:
within($geometry, buffer(@atlas_geometry, distance))
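If you later want to script the SVG export instead of printing each atlas page by hand, here is a rough PyQGIS 3 sketch. It must run inside QGIS (where the qgis module is available); the layout name "mymap" and output directory are hypothetical, and the import guard is only so the sketch degrades gracefully elsewhere:

```python
try:
    from qgis.core import QgsProject, QgsLayoutExporter  # available inside QGIS
except ImportError:
    QgsProject = QgsLayoutExporter = None  # sketch only outside QGIS

def export_atlas_to_svg(layout_name="mymap", out_dir="/tmp"):
    """Iterate the layout's atlas and export one SVG per coverage point."""
    if QgsProject is None:
        return []
    layout = QgsProject.instance().layoutManager().layoutByName(layout_name)
    atlas = layout.atlas()
    written = []
    atlas.beginRender()
    for i in range(atlas.count()):
        atlas.seekTo(i)  # centers the atlas-controlled map on feature i
        path = "{}/{}.svg".format(out_dir, atlas.currentFilename())
        QgsLayoutExporter(layout).exportToSvg(
            path, QgsLayoutExporter.SvgExportSettings())
        written.append(path)
    atlas.endRender()
    return written
```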
How to get feature for a given bbox from shapefile by ogr
I have some features (point, line, polygon) in shapefile format. Now I want to get the features for a given bounding box. Is it possible to remove the features outside of the bbox?
And for a polygon, I think it is necessary to close it.
I wonder if this is possible?
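With the GDAL/OGR Python bindings, a spatial filter keeps only the features whose geometries intersect the bbox (it selects, it does not clip). A sketch; the helper name is mine, and the import guard is only for machines without GDAL:

```python
import os

try:
    from osgeo import ogr
except ImportError:
    ogr = None  # GDAL/OGR Python bindings not installed

def feature_ids_in_bbox(shp_path, xmin, ymin, xmax, ymax):
    """Return FIDs of features whose geometry intersects the bounding box."""
    if ogr is None or not os.path.exists(shp_path):
        return []
    ds = ogr.Open(shp_path)
    layer = ds.GetLayer(0)
    # Only features intersecting this rectangle are returned by iteration.
    layer.SetSpatialFilterRect(xmin, ymin, xmax, ymax)
    return [f.GetFID() for f in layer]
```

If you actually want the geometries cut at the bbox edge (which for polygons produces closed clipped rings), the ogr2ogr command-line tool's -clipsrc option does that, whereas its -spat option behaves like the filter above.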
Postgis - Finding points along a path
I am unsure of the best approach to this problem. I can think of some ways of doing it which I will list below but I am looking for the best and most efficient way of doing this.
Given a variable number of points, create a "route" or line between the points.
Note: the search areas do not have to be circles; A, B, C, D can use rectangular bounding boxes.
- Table with points (x,y - technically lat/lon)
- Input points to create a route
We are using PostgreSQL (8.4) and PostGIS (1.3.6) and Python.
This is the solution I came up with. Thoughts? Ideas? Input?
- Create three polygon "tubes" (A->B, B->C, C->D)
- Filter points to only those in the polygons.
- Subtract B->C from A->B so the polygons do not overlap (no duplicates)
- Subtract C->D from B->C so the polygons do not overlap (no duplicates)
My approach was naive; after taking some time and looking at the PostGIS methods, I came up with this single SQL call. Note it's specific to our database, but it might help someone in the future. I'm sure it could be improved as well.
I had to project the data because I am using PostGIS 1.3.4, which doesn't support the Geography type.
Basically what I am doing is I am using ST_MakeLine and a query to locate "aerodromes" and return their geometry.
I had to order them (using the CASE directive) so that the line would be connected in the right order.
I then project and buffer this line to create a Polygon that I can then use to see what other aerodromes intersect with the buffered polygon.
Using the unbuffered line (the route) and ST_Line_Locate_Point, I then order the discovered aerodromes as they appear along the path.
You need two PostGIS functions: ST_Buffer and ST_Line_Locate_Point.
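A condensed sketch of that query. The table and column names (aerodromes, geom, id) are placeholders, 10000 is an assumed corridor width in projected units, and ST_Line_Locate_Point is the PostGIS 1.x spelling used above:

```sql
-- :route is the projected ST_MakeLine result described above
SELECT a.id,
       ST_Line_Locate_Point(:route, a.geom) AS frac_along  -- 0..1 along the route
FROM aerodromes a
WHERE ST_Intersects(a.geom, ST_Buffer(:route, 10000))      -- inside the corridor
ORDER BY frac_along;
```

Ordering by the located fraction is what puts the aerodromes in the order they appear along the path.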
Access Violation in ArcObject Multi-threaded Application
I think I have figured it out. The most probable cause of the exception in a multithreaded application:
When two threads try to open the same feature class, it breaks, because one thread is already in the process of opening it while the other tries to do the same. So it gives an error like:
"Memory could not be read or written from protected memory"
This is my guess, but what puzzles me is why there is no separation of concerns, even if I am opening a different workspace in each thread. Maybe ArcObjects internally looks at the same address space for any object in the database. I have written a few lines of code and tried to run it with different threads opening different object classes, and this also breaks; but
whenever I delegate a few threads to open the same feature class, it breaks with the above errors.
Also, not releasing memory is another cause of such errors.
Adding some code for more clarification:
The above code breaks with the said exception most of the time.
@AndOne, I am using Oracle Spatial direct connect. Also, I created a new feature class and ran the same code against it; this also fails when the number of threads is increased. There are a few differences which I could figure out between these two feature classes:
Sql - PostGIS Intersection and Summarise Attributes
I am usually an ArcGIS Desktop user, but keen to start using PostGIS more, and I have a really big bit of processing to do. Not sure what functions to use; hopefully someone can help.
I have a polygon dataset (several million features) based on a type of landuse/landcover classification (20 categories). I have a number of regions in another dataset.
For each of the regions, I would like to know the area of each landcover classification.
In ArcGIS (if it were a smaller dataset), I would imagine first adding the region to each of the polygons in the attribute table using a join, then using "Summarize" on the table by region and by landcover classification.
Not sure where to start doing this in PostGIS / SQL.
Wow thanks that has been a huge help.
It has been running a long time (44 hours!) and now I get:
I assume this is a problem in the original data. Is it just a case of reviewing the original data, or can I first check the topology somehow for the whole dataset? Is there something about accepting certain errors / processing tolerances?
Assuming you have the following table layout
Area values per landcover type per region can be calculated using
ST_Intersection is used to account for landcover polygons that are only partially within a region.
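A sketch of such a query, under the assumption of tables landcover(geom, class) and regions(geom, region_id); the names are hypothetical, so adjust them to your schema:

```sql
SELECT r.region_id,
       l.class,
       SUM(ST_Area(ST_Intersection(l.geom, r.geom))) AS area
FROM   landcover l
JOIN   regions   r ON ST_Intersects(l.geom, r.geom)  -- index-assisted pre-filter
GROUP  BY r.region_id, l.class;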
Arcgis desktop - How to make a route event layer?
I'm using ArcGIS 10.1, but I'm a beginner with this program.
I want to make a route event layer based on a route I have already defined. My question is about the third line, where the program asks me for an input event table. I read a couple of explanations about this tool, but none of them explains what this table is. Do I have to create an Excel table? If yes, what does it have to look like?
You need to supply a table. This can be a .dbf file, a geodatabase table, or a sheet of an Excel file. I'd recommend exporting Excel/.dbf to a geodatabase table to make sure the field data types are converted properly.
After you've supplied the table, you will need to provide several fields (basically map your input table fields to the required in-fields). You can read what those fields mean here at the Make Route Event Layer (Linear Referencing) help page.
If you are new to linear referencing, consider going through the ArcTutor tutorial, which is shipped with your ArcGIS media. You can also download it from the Esri Customer Care portal.
Qgis - Upright/Horizontal labels when labeling circular polygons at perimeter?
I have several polygons (circles) I want to label with their ID number. The label should be outside the circles, so I use the positioning option "Use perimeter" and the tickbox "Next to the line" (or similar; my QGIS is in Spanish).
QGIS now automatically aligns the labels with the curvature of the line. Is there an option or a way to orient them horizontally?
While @sysdmin1138's answer is correct, it's worth mentioning that the scope is not the only reason why things go missing from the view: some things are invisible by default.
Some attributes such as physicalDeliveryOfficeName are hidden from view, so you can't delegate them easily. A lot of other attributes are also hidden, but physicalDeliveryOfficeName is a very specific case and is a good example of how things work for delegation.
The Per-Property Permissions tab for a user object that you view through Active Directory Users and Computers may not display every property of the user object. This is because the user interface for access control filters out object and property types to make the list easier to manage. While the properties of an object are defined in the schema, the list of filtered properties that are displayed is stored in the Dssec.dat file that is located in the %systemroot%\System32 folder on all domain controllers. You can edit the entries for an object in the file to display the filtered properties through the user interface.
A filtered property looks like this in the Dssec.dat file:
To display the read and write permissions for a property of an object, you can edit the filter value to display one or both of the permissions. To display both the read and write permissions for a property, change the value to zero (0):
To display only the write permission for a property, change the value to 1:
To display only the read permissions for a property, change the value to 2:
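Putting the values together, a hypothetical excerpt of Dssec.dat for the attribute discussed here might look like this (only one of the four lines would actually be present at a time):

```
[user]
physicalDeliveryOfficeName=7   ; filtered: neither read nor write shown (default)
physicalDeliveryOfficeName=0   ; show both read and write permissions
physicalDeliveryOfficeName=1   ; show only the write permission
physicalDeliveryOfficeName=2   ; show only the read permission
```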
After you edit the Dssec.dat file, you must quit and restart Active Directory Users and Computers to see the properties that are no longer filtered. The file is also machine-specific, so changing it on one machine doesn't update the others; it's up to you whether you want the setting visible everywhere or not.
The full story about physicalDeliveryOfficeName and how to change it, with screenshots, can be read on my blog.
PS1. Since physicalDeliveryOfficeName is a special case, after modifying this setting look for Read/Write Office Location; unfortunately the name physicalDeliveryOfficeName itself never shows up.
PS2. Unless those settings are uncovered by modifying Dssec.dat, you won't be able to see them. Since this file is per-computer, it's entirely possible it's visible on some computers and not on others, depending on whether someone made the change there earlier. This could explain why you could see it before and not later on.
PS3. Sorry for the resurrection, but I just spent a few hours trying to find the cause, so I thought I would share it for future reference.
You can always invoke a cmd shell with administrator rights (or use any other runas method), and use a tool such as SETX to modify the path permanently. Existing shells and/or running programs will probably still be using the old path, but any new shell or program will use the new settings.
For accounts without admin privileges:
Open "User Accounts" and choose "Change my environment variables" (http://support.microsoft.com/kb/931715).
This dialog will show you your current user variables as well as the system variables. You may need to add a local PATH variable if you haven't already.
To update your Path to include the Python 3.3 directory, for instance, click New:
Variable Name: PATH
Variable Value: %PATH%;C:\Python33
This creates a local PATH by taking the current system PATH and appending to it.
Wednesday, 25 February 2015
Geoserver - OpenLayers 3: Cross-Origin Request Blocked: The Same Origin Policy disallows
Using OpenLayers 3, I cannot get this message to go away:
I have tried setting the crossOrigin setting to:
I only see the zoom in/out control but the layer is not rendered.
I went with simon's option 3 below. I enabled CORS in GeoServer by copying the necessary jetty-servlets jar files and enabling it in WEB-INF\web.xml:
After I did that, I tested the page again and receive the same error:
Looks like I am still missing something. Do I have to do anything from the OpenLayers Side?
I ended up getting rid of Jetty and uninstalling GeoServer completely. The problem is that the GeoServer Windows installer installs a version of Jetty that is 4 years old (Jetty 6.1.8)! Even though I had copied the jar files for CORS, it is only supported in Jetty 7+.
I found out that you can install a WAR file. I decided to use Tomcat since that is what GeoServer is mostly tested on according to this note from the GeoServer website:
Note GeoServer has been mostly tested using Tomcat, and therefore these instructions may not work with other container applications.
These are the instructions for installing the WAR file:
This is a nice how-to video also:
After you complete the install, you then enable CORS:
Spatial database - Need a vector format that is editable in QGIS and supports >10 character column names
For reasons, I need to create a new column in a vector layer with a name longer than 10 characters, and export to MapInfo TAB format. I know shapefiles don't support >10 character column names, so I'm looking for an intermediate format that is editable in QGIS, allows adding columns after features have been created, and supports >10 character UPPERCASE column names. All of the vector formats seem to support some combination of these, but not all three.
Anybody have this problem in the past, and/or know of a format that supports this use case?
Any SQL-compliant spatial database should work, including PostGIS (if you happen to be running a PostGIS server) or SpatiaLite (but note quirks about behavior mentioned below).
Either one will satisfy your criteria:
Identifiers (including column names) can be up to 63 characters in a default PostGIS installation (and this can be increased by changing the server's NAMEDATALEN constant). I can't find any limit on column name length in the SQLite docs, and blog posts seem to confirm that there isn't one.
In general, SQL developers will try to avoid using case-sensitive identifiers. However, case-sensitive identifiers can be created using quoting. Without quoting, some databases (Oracle) will fold identifiers to UPPERCASE, some (PostgreSQL/PostGIS) will fold identifiers to lowercase, and some (SQLite/SpatiaLite, SQL Server) are case-preserving but not case-sensitive.
In PostGIS you can force uppercase by using quoted identifiers:
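For example (the table and column names here are illustrative):

```sql
-- Unquoted identifiers are folded to lowercase by PostgreSQL;
-- quoting preserves the case exactly as written.
CREATE TABLE my_layer (
    id   serial PRIMARY KEY,
    "LONG_UPPERCASE_COLUMN_NAME" text,
    geom geometry(Point, 4326)
);
```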
Note that since I never do this kind of table creation in QGIS DB Manager, I don't know whether you can force uppercase column names that way, but you can do so using other management tools or writing the SQL yourself (which you can submit using DB Manager's SQL Editor) as demonstrated above.
This will require you to use quoting in any SQL that you write, as an unquoted SELECT COLUMN_NAME_1 ... will be folded to SELECT column_name_1 ... internally, and column_name_1 does not equal "COLUMN_NAME_1".
For SpatiaLite, the behavior is a little strange. SQLite will preserve the case of the column as created (whether or not you quote the identifier), and will also preserve it on export, but in any SQL that you write the column name is treated case-insensitively. Any combination of upper- and lowercase letters will be accepted regardless of quoting, and SQLite will treat COLUMN_NAME_1 and column_name_1 as conflicting names and will not let you create both columns in the same table. (PostGIS, OTOH, will allow names that differ only in case.)
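This SQLite behavior can be demonstrated with Python's built-in sqlite3 module (plain SQLite, so no spatial columns, but identifier handling is the same):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE t ("COLUMN_NAME_1" TEXT)')       # case is preserved...
con.execute("INSERT INTO t (column_name_1) VALUES ('x')")  # ...but lookups ignore it

# The stored column name keeps the original case:
stored = con.execute("SELECT * FROM t").description[0][0]

# A column name differing only in case conflicts with the existing one:
try:
    con.execute("ALTER TABLE t ADD COLUMN column_name_1 TEXT")
    conflict = False
except sqlite3.OperationalError:
    conflict = True  # "duplicate column name"
```

So the case survives a round trip, but you cannot rely on case alone to distinguish columns, unlike in PostGIS.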
For either PostGIS or SpatiaLite, the case will be preserved if you load the layer in QGIS and export to TAB using the Save As dialog.
More info on identifier case-sensitivity is available in this extremely informative blog post: http://www.alberton.info/dbms_identifiers_and_case_sensitivity.html
NOTE: Original answer only discussed PostGIS. Based on answer by @user30184, I have significantly changed answer to include information about SpatiaLite.
Making stops not in certain sequence in order to find shortest route with ArcGIS Network Analyst?
Is it possible to make stops not follow a certain sequence, in order to find the shortest route with Network Analyst?
I now use the small green points as input stops because I want the lines with a pink dot beside them in the route. Then I reorder them and don't preserve the end and start point. I thought this meant that Network Analyst doesn't impose a real sequence on the input stops, doesn't choose a real start and end point, and just finds the shortest route between the points.
However, if I look at the output of the tool (light green), it seems like Network Analyst still uses a kind of sequence and a begin and end point, because in the picture it looks like the point the blue arrow points to is a start or end point. The red line is the route I want. The route now makes a loop (at the left, where the black arrow is), going past the south segment instead of going north and connecting via the small grey link near the blue arrow (like the red route I want). The green output I now receive is not the shortest route.
I thought one problem could be that the point near the blue arrow is assigned by the tool as a start or end point, and another that Network Analyst takes the segment near the black arrow because the point in the south-west (black circle) comes later in the sequence than the point in the middle (black circle). So it gives each point 1, 2, 3 and so on, instead of treating them all as 0, 0, 0. I thought this could be solved by having the stops as equal points (all zeros or something) and calculating the shortest route between them, instead of treating the stops as a sequence going from 1 to 2 to 3 and so on. But I don't know if this is right; see also the last comment of @ChrisW on Using line feature as stops input in network analyst?.