
Split\Join gbXML #148

Open
EK-CEL opened this issue Oct 17, 2018 · 5 comments
Comments

@EK-CEL
Collaborator

EK-CEL commented Oct 17, 2018

Quite often we have large multistory hospital projects with thousands of spaces. Such a building is very slow in the Spider viewer: it took me 35 minutes to upload my file and get to the Levels list, which makes it unrealistic to review the model and make the required adjustments.
It would be nice to have an application (probably a Spider module) that can split a gbXML file by levels into a number of partial files. Then I could upload each file and review it. After all the partial files are reviewed, I could join them back together into a single gbXML file.
Is this possible?

Thank you.
EK
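The splitting idea above might be sketched roughly as follows. This is an illustrative sketch, not existing Spider code: it treats the gbXML as plain text and uses regex matching, and it assumes spaces reference their level through the standard gbXML `buildingStoreyIdRef` attribute. A robust split tool would use a real XML parser and would also carry over the surfaces, constructions, and schedules that each space needs.

```javascript
// Hypothetical sketch of the "split by level" idea: group <Space> elements
// by the storey they reference, using plain text/regex matching.
// The inline sample stands in for a real gbXML file read from disk.

const sampleGbxml = `
<gbXML>
  <Space id="sp-1" buildingStoreyIdRef="storey-1"></Space>
  <Space id="sp-2" buildingStoreyIdRef="storey-1"></Space>
  <Space id="sp-3" buildingStoreyIdRef="storey-2"></Space>
</gbXML>`;

// Collect each <Space>...</Space> fragment under the storey it references.
function groupSpacesByStorey(xmlText) {
  const groups = {};
  const spaceRe =
    /<Space\b[^>]*buildingStoreyIdRef="([^"]+)"[^>]*>[\s\S]*?<\/Space>/g;
  let match;
  while ((match = spaceRe.exec(xmlText)) !== null) {
    const storeyId = match[1];
    (groups[storeyId] = groups[storeyId] || []).push(match[0]);
  }
  return groups;
}

const byStorey = groupSpacesByStorey(sampleGbxml);
console.log(Object.keys(byStorey));       // [ 'storey-1', 'storey-2' ]
console.log(byStorey["storey-1"].length); // 2
```

Joining the partial files back would be the reverse step: concatenating the per-storey fragments under a single gbXML root.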

@mdengusiak
Member

@EK-CEL
I just did a test and loaded a very complex 143 MB model with 1989 spaces, and it took 95 seconds to load.
However, loading only one level is an interesting idea. The problem might be what to show when you click a wall whose adjacent space is not loaded.
We will discuss this.
Thanks

@mdengusiak
Member

@EK-CEL you can view the model by storey:

[screenshot]

so you can see only the floor you need:

[screenshot]

@EK-CEL
Collaborator Author

EK-CEL commented Oct 17, 2018

@mdengusiak

I just did a test and loaded a very complex 143 MB model with 1989 spaces, and it took 95 seconds to load.

Yes, the upload is fast enough. But then the screen freezes for regeneration. When I get the model on screen, I click the Review button and wait again. So in total it took me more than half an hour. How long does it take for you?

Yes, I use the Storeys list to review the model, isolating it level by level.
Even in this mode, selecting surfaces and navigating are slow: typically 5-10 seconds, sometimes up to 30. It is psychologically uncomfortable. :-)
Spider can isolate the level, but not completely, as every click still refers to the entire database. The problem with large files could probably be solved by a "deeper isolation" of the graphics shown on screen.

@theo-armour
Member

@EK-CEL @mdengusiak

Thank you both for your suggestions and comments.

There is much that can be done to improve speed via selective loading processes.

The issue from my end is that I am no longer in practice and thus have zero personal access to sample files let alone nice, large files that would be good for coding and testing.

The largest file I currently have access to is this one at 20 MB:

https://github.com/ladybug-tools/spider/blob/master/gbxml-sample-files/coventry-university-of-warwick-small.xml

So please make some big files available - publicly or privately - and I will have a go at selective loading ideas.

And the initial request has been added to the To Do / Wish List

@theo-armour
Member

@EK-CEL

[Screen capture: Spider gbXML Hacker R1.0]

I received your file. Thank you for sharing. It will, as you requested, remain private.

An interesting thing: the file breaks the Chrome browser on Windows 10, the Firefox browser on Windows 10, and OpenStudio on Windows 10. And yet XML is a native format for all of these apps.

Of course it also breaks the Spider viewers, because they rely on the browser's XML parser.

The thing is that you can also read XML files as text. I have written a script that does this and have started to explore the contents of your file.

  • File size: 698,000,000 bytes
  • Lines of text: 8,746,903
  • Spaces: 5,550
  • Surfaces: 70,774
  • Zones: 2,618

You can play with the script here:

Full Screen: Spider gbXML Hacker R1.0

Read Me

With the Chrome browser on Windows 10 (Core i7, Nvidia GPU), the script reads the file and displays limited text data in under 15 seconds. No attempt is made to display the data in 3D in this release.

  • Opens file
  • Splits data into an array of trimmed lines
  • Searches the array for selected text data such as 'space', 'surface' and 'zone'
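The three steps above can be sketched roughly like this. This is an illustrative sketch in plain JavaScript, not the actual Spider gbXML Hacker code, and the inline sample string stands in for a real file read from disk:

```javascript
// Treat the gbXML as raw text: split it into trimmed lines, then scan the
// array for lines that open a given element such as Space, Surface, or Zone.
// (The inline sample stands in for the contents of a real 698 MB file.)

const sampleGbxml = `
<gbXML>
  <Campus>
    <Space id="sp-1"></Space>
    <Space id="sp-2"></Space>
    <Surface id="su-1"></Surface>
  </Campus>
  <Zone id="z-1"></Zone>
</gbXML>`;

// Steps 1-2: take the "file" contents and split them into trimmed lines.
const lines = sampleGbxml.split("\n").map(line => line.trim());

// Step 3: count lines that open a given element, e.g. "<Space ...".
function countElements(lines, tagName) {
  const opening = "<" + tagName;
  return lines.filter(
    line => line.startsWith(opening + " ") || line.startsWith(opening + ">")
  ).length;
}

console.log("Spaces:", countElements(lines, "Space"));     // Spaces: 2
console.log("Surfaces:", countElements(lines, "Surface")); // Surfaces: 1
console.log("Zones:", countElements(lines, "Zone"));       // Zones: 1
```

On a real file, the same approach would first read the contents with the browser File API (or `fs.readFileSync` in Node.js) before splitting; only the line-splitting and scanning steps are shown here.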

Frankly, I am astounded that browsers can read files that are 2/3 of a gigabyte in size. I am astonished that the data can be loaded into RAM with JavaScript in a few seconds. And I am beside myself that a derp programmer such as myself can produce a script that parses the data in a few more seconds.

Anyway, this opens up new possibilities for dealing with gbXML files not as XML but as raw text. It would take a new code base, or at least a different route than the current viewers take, and I am quite happy to do that. Who knows, maybe some of the techniques could even speed up the existing viewers.

The thing is that we should be quite clear about the goals and objectives of a project like this. What do people really want to do with their huge gbXML files? How can manipulating these files in some way end up producing better-lit, more energy-efficient buildings? These are complicated issues, and the more brains on the project the better.

I am wondering: would you be willing to phrase a question or state the problem on the Ladybug Tools forum? Something like: what do people need or want to do with big gbXML files that is high priority?

https://discourse.ladybug.tools/c/spider

If you do that, I will do my best to come up with some demos in response.
