I wanted to run an idea by you all that I would like to pursue, but I don't really know where to start or what engine (if any) provides the capabilities I need.
The IDEAL that I am working from is called "Starlight Visualization System" (http://starlight.pnnl.gov), created by Pacific Northwest National Laboratory and funded by Battelle (a national security research think tank; they helped develop xerography, for example). Ironically, I attended Ohio State University, and Battelle is right next door and funded in part by the school, yet the software is only used for military and corporate intelligence, despite my interest and the plethora of potential uses in academia.
Anyway, I know I will never attain anywhere near that level of functionality (it is the IDEAL from which I will draw ideas, mostly proofs of concept).
The gist is this: the program (Starlight, anyway) uses XML and a variety of other interoperable markup formats and linked databases to import large amounts of disparate data. In some sense, it works like 3D logic mind mapping, seen in a simple (but unextensible) form in Blender here: https://blenderartists.org/forum/showthread.php?361006-3D-Logic-Mind-Mapping-(-update-vimeo-video-show-and-blend-file-at-5-) .
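For the data-import side, here is a minimal sketch of turning an XML document into node and edge lists using only Python's standard library. The `<node>`/`<link>` schema is entirely hypothetical (not Starlight's actual format), just to show the idea of loading disparate data into a graph structure at runtime:

```python
import xml.etree.ElementTree as ET

# Hypothetical graph markup: nodes plus weighted links between them.
SAMPLE = """
<graph>
  <node id="paper1" label="Paper One"/>
  <node id="paper2" label="Paper Two"/>
  <link source="paper1" target="paper2" weight="3"/>
</graph>
"""

def parse_graph(xml_text):
    """Parse the hypothetical <graph> document into node and edge collections."""
    root = ET.fromstring(xml_text)
    nodes = {n.get("id"): n.get("label") for n in root.findall("node")}
    edges = [(l.get("source"), l.get("target"), int(l.get("weight", 1)))
             for l in root.findall("link")]
    return nodes, edges

nodes, edges = parse_graph(SAMPLE)
print(nodes)   # {'paper1': 'Paper One', 'paper2': 'Paper Two'}
print(edges)   # [('paper1', 'paper2', 3)]
```

Since `xml.etree.ElementTree` ships with Python, this same code should run inside Blender's bundled interpreter, so the graph could be built at runtime rather than pre-produced.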
In any case, I would ideally like it to be automated to some extent, although that is unlikely. Something more achievable would be a set of pre-existing linked databases, or something along those lines.
Two "simple" ideas, just for practice and to play around with, would both start from an essentially blank screen containing nothing but three visible axes.
From there, the first idea would be for Blender to ask for a directory which, when chosen, would be displayed as a visual graph tree starting from the initial directory and expanding into its subdirectories, perhaps asking how many levels of subdirectories to open immediately. Nodes would be indicated by some predetermined (or possibly customizable) set of symbols, with directory names either shown, abbreviated, or hidden, simply to display the topology. The camera would be able to rotate, move, zoom, or focus on a particular element of the tree, and clicking each level would expand or collapse the visible directories and files.

The graph might also be able to display the information in different ways, such as selecting a different directory to act as the central point from which to view the rest. Options could be offered for a simple linear downward tree, one that branches downward with each subdirectory's contents arranged circularly (more like a real tree), or a truly spherical layout. A neat addition would be the ability to manually or automatically select sections of the graph and display a subgraph of just those components, perhaps with some symbology denoting aspects of the nodes that have been hidden.
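The walk-and-layout part of this first idea can be prototyped outside any engine. The sketch below (the function names and the layout formula are my own invention, not any engine's API) builds a limited-depth tree with `os.scandir` and assigns each node a 3D position in the circular "real tree" layout; inside Blender or the BGE you would then spawn an object at each position instead of printing:

```python
import math
import os
import tempfile

def build_tree(path, depth=2):
    """Recursively collect subdirectories and files up to `depth` levels."""
    node = {"name": os.path.basename(path) or path, "children": []}
    if depth == 0:
        return node
    try:
        entries = sorted(os.scandir(path), key=lambda e: e.name)
    except PermissionError:
        return node  # unreadable directory: show the node, hide its contents
    for entry in entries:
        if entry.is_dir(follow_symlinks=False):
            node["children"].append(build_tree(entry.path, depth - 1))
        else:
            node["children"].append({"name": entry.name, "children": []})
    return node

def radial_layout(node, x=0.0, y=0.0, z=0.0, radius=2.0, positions=None):
    """Place each child on a circle below its parent, shrinking the radius
    per level. Keyed by name for brevity, so duplicate names would collide."""
    if positions is None:
        positions = {}
    positions[node["name"]] = (x, y, z)
    kids = node["children"]
    for i, child in enumerate(kids):
        angle = 2 * math.pi * i / max(len(kids), 1)
        radial_layout(child,
                      x + radius * math.cos(angle),
                      y + radius * math.sin(angle),
                      z - 2.0,
                      radius * 0.5,
                      positions)
    return positions

# Demo on a throwaway directory; in the BGE you would instead add an
# object per position (e.g. via the scene's addObject).
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "papers"))
    open(os.path.join(root, "notes.txt"), "w").close()
    tree = build_tree(root, depth=2)
    positions = radial_layout(tree)
    print(sorted(c["name"] for c in tree["children"]))  # ['notes.txt', 'papers']
```

Expanding or collapsing a level would then just mean re-running `build_tree` with a different `depth` (or per-node flags) and re-spawning objects, which is exactly the runtime behavior the pre-keyframed .blend above cannot do.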
As a proof of concept it wouldn't be very useful per se, but it would be very interesting, especially because the data would be loaded and related at runtime. Unfortunately, in the closest example I could find, the .blend linked above (and ignoring the j3d Blender 3D graph, which I have not looked into much and which appears to run within Blender itself, not the BGE), every aspect of the tree (chapters, subchapters, content, etc.) is pre-produced rather than determined at runtime, and it relies on keyframed animations of pre-produced bones rather than more general animations that could adjust dynamically to data sets of different sizes without pre-producing the bones and the data.
A second and even more daunting example, although closer to my true intent, would be visualizing academic papers. Ideally the content would be generated by an additional element, a preprocessing program or something, searching through a given set of articles (although we all saw what happened when Aaron Swartz attempted to automate the collection, and possibly the parsing, of papers behind paywalls). A more reasonable workaround would be a student who has already legally collected papers known to have intertextuality, or someone who could legally and intentionally build a larger data set. From there, probably starting manually (as a proof of concept, and obviously requiring an immense amount of work), several databases could be constructed: the particular journals, or even the route of access to each journal, followed by author, year published, and keywords (which are often given). Ideally one could dynamically search each paper's text, or even just the abstract (much easier if done manually with prechosen text searches included), to link two or more papers together dynamically based on the strength of their relationship (number of text hits as a measure? other measures?). These relationships would then be viewed and manipulated much like the "directory tree" example above. As my particular interests are geography, sociology, politics, and philosophy, perhaps a map or set of maps could be generated (from Blender and OpenStreetMap?) that links papers to study location, publication location, author nationality where relevant, etc.
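The "strength of relationship" measure could be prototyped very simply before worrying about engines at all. Below is a sketch, assuming hand-entered keyword lists (the papers and keywords are made up), where strength is just the count of shared keywords; swapping in abstract text hits would only change the `link_strength` function:

```python
from itertools import combinations

def link_strength(keywords_a, keywords_b):
    """Naive relationship strength: number of shared keywords.
    One possible measure among many (text hits, citations, ...)."""
    return len(set(keywords_a) & set(keywords_b))

# Hypothetical manually-built database of papers and their keywords.
papers = {
    "A": ["geography", "urbanism", "politics"],
    "B": ["politics", "philosophy"],
    "C": ["geography", "politics", "cartography"],
}

# Weighted edges between every pair of papers that share at least one keyword;
# these edges would feed the same graph layout as the directory-tree example.
edges = [(a, b, link_strength(papers[a], papers[b]))
         for a, b in combinations(papers, 2)]
edges = [e for e in edges if e[2] > 0]
print(edges)  # [('A', 'B', 1), ('A', 'C', 2), ('B', 'C', 1)]
```

The point is that the paper graph and the directory graph reduce to the same structure (nodes plus weighted edges), so one visualization layer could serve both proofs of concept.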
I understand this would all be difficult, if not impossible, to do without a custom engine of some sort. But to what degree could any of it be done in the BGE? Unity? Unreal? Outside scripts could be run first, or premade linked databases used, just to help prove the concept.
What thoughts might anyone have on this? On whether any engines would be capable of creating something similar? On methods, with possible pseudo-pseudo-code or pseudo-UML describing them, and notes on what is possible, what is impossible but necessary, and in which engines? What other engines might help? VTK? Can Blender (particularly the BGE) be linked to it, or vice versa, or to any other engine?
Thank you in advance if anyone has any thoughts or suggestions. I am in no way claiming I am capable of pulling all of this off, but I would like to try, especially if there are any interesting suggestions!