Bliss 3D Toolkit

ZOMG, you can make games in your game! :smiley:

an engine you can code inside of?

This is a secret preview of my "game"; not sure if the world is ready for this: :RocknRoll:

Mind = blown

The problem with using a traditional 3D engine for large worlds with such volumes of entities to draw is that it isn't designed for the task. Many people here see Minecraft and think of using many cubes, whilst in reality Minecraft employs a number of algorithms to reduce the necessary calculations per frame. It's running an engine designed for exactly that, which is rather important.

You are correct in your observation, though. I think it's simply that developers often undertake projects as a learning experience. This means they'd rather do something worse than someone else (because, in reality, there's nearly always someone "better") because it's their code. This is not an underhand remark aimed at Mahalin :slight_smile:

@Mahalin
Ah, now I understand. How are you interfacing with the output from the window manager? What's the latency like?

@BPR
I figured, why can't we use the game engine itself to make itself? :stuck_out_tongue:

@Agoose
I use the same method as Compiz: GLX_EXT_texture_from_pixmap, so it's all done on the GPU. Because this uses X Window System OpenGL extensions, it's only possible on Linux.

I've briefly looked into the Windows (WGL) and Mac (CGL) APIs, but they don't really provide this functionality. I've seen 3D desktop programs for them, but they never integrated well; if it were feasible, I probably would've come across it. In theory, you can also do screen captures, but the latency isn't workable because of all the raw pixel data, and it's kind of hackish (that was actually my first attempt).

It's fluid and smooth. The videos are jerky due to the recording and my naturally twitchy mouse movements lol. So as I design Extremis, the design is influenced towards open, completely dynamic worlds. I have to say, every time I use it, I always trip out a little lol.

In these videos, I'm launching from the boot-up terminal using xinit and startx. It's meant to replace the desktop, so in the .xinitrc, instead of running e.g. gnome-session, you launch blenderplayer with the "game".
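For anyone who wants to try the same setup, a minimal .xinitrc could look like the following (both paths are placeholders, not the ones from my machine):

```
# ~/.xinitrc - start the "game" as the X session instead of a desktop
# exec replaces the shell, so the X session ends when blenderplayer quits
exec /path/to/blenderplayer /path/to/game.blend
```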

Here's a Blender demo:

Anything that works in the BGE works as well. Here I have a logic brick spinning the window:

I don't have a video, but transparency also works. I have a special object-oriented 3D menu system I've created, but it's not polished enough yet for a demonstration.

Originally, I wanted to do a demonstration with a monkey head in the engine and the same model open in Blender, with changes in Blender updated in the BGE in real time. With Extremis, I can push this further so changes can go back and forth!

The problem I see with that approach is that nobody wants to wear something on their fingers due to the inconvenience. A simple example: what if you're eating or something?

I saw something similar to it before:

My views on various interaction methods/devices:

  • 3D cameras (Kinect, LEAP, etc.) - no solution for gorilla arm*; occlusion and lighting problems make accuracy inconsistent
  • Wearable devices - too inconvenient outside of specific situations. The key here is not to wear anything on the fingers. Unfortunately, that's exactly where we need to gather information…
  • I seriously thought about building some sort of exo-suit for the shoulder :smiley:

However, I believe wearable computing is the future. That's why I think the Myo is the best solution so far. I have one on the way:

However, I've read conflicting reports on how well it can track individual fingers. Also, it can't track absolute position.

I designed Flow UI for productivity and efficiency, and it's optimized for traditional mouse/keyboard and touchscreens. That said, I've used it with a gesture camera, a SpaceNav, a PlayStation controller, etc. It has potential for other input methods, but I haven't seen anything that can beat the mouse/keyboard combo. While I was working on Flow UI, I came up with another idea similar to Swype but, in theory, faster, more accurate, and usable without looking.

Voodoo magic!

Add support for Google Cardboard? :smiley:

So Flow UI is like a virtual machine running inside the game engine?

Thanks. That actually makes a lot of sense to me, and it also explains why the game engine portion does indeed seem to lack a lot of use and development.

…not surprised this is doable in Linux. Very interesting work! I hope you succeed, whatever the end goal is :slight_smile: I'm making my own 'the-bge-isn't-quite-sufficient engine' too, in D. Nice to know I'm not alone.

Btw, for your sanity and productivity, please don't switch to C++ ;__; Being someone who spent over 11 years enthusiastically learning and using C++ practically as a second language, of my own volition and for personal game-development endeavors, acquiring in the process a relatively deep understanding of its features and an (in hindsight) horribly misjudged sense of pride in using it, I feel qualified to suggest you google 'fqa'. I don't even recognise the name cee-plus-plus anymore; all I hear is 'eldritch bureaucratic abomination'.

Yeah, I've seen that FQA; very enlightening lol. Although C++ has its place, like every other language.

Same to you! It's always a good learning experience to roll your own game engine; fortunately, today it's not too difficult for simpler things, unlike the old days when assembly was regularly used (the first Doom engine). It's like working in a higher-level language while knowing some of the C underneath, so you have at least some idea of what's going on behind the scenes.

It looks like everyone is searching for an alternative, which is a good thing. However, I have the feeling the BGE is leaning towards Panda3D.

Anyhow, update time!

In the demo, I implemented a basic mouselook camera. Left click on the cube applies force away from the camera and right click applies force towards the camera.
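The click-to-push part is only a handful of lines. Since Harmony's API isn't published yet, here's a minimal sketch of the same idea written against the BGE API instead; the 50-unit ray length and the force magnitude are arbitrary numbers I picked for illustration:

```python
from bge import logic, events

def mouse_click(cont):
    cam = cont.owner  # this controller is attached to the camera
    mouse = logic.mouse

    # Cast a ray 50 units along the view axis (cameras look down local -Z).
    view_dir = -cam.getAxisVect((0.0, 0.0, 1.0))
    hit_obj, hit_pos, _ = cam.rayCast(cam.worldPosition + view_dir * 50.0)
    if hit_obj is None:
        return

    # Push direction: from the camera toward the hit point.
    push = (hit_pos - cam.worldPosition).normalized() * 20.0

    if mouse.events[events.LEFTMOUSE] == logic.KX_INPUT_JUST_ACTIVATED:
        hit_obj.applyForce(push, False)    # away from the camera
    elif mouse.events[events.RIGHTMOUSE] == logic.KX_INPUT_JUST_ACTIVATED:
        hit_obj.applyForce(-push, False)   # toward the camera
```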

  • Refactored database
      • Originally, I used the converter code from the BGE, but I wrote a new implementation using BMesh. This means meshes now need to be in BMesh format, along with being triangulated.
      • In the future, I might remove the use of the Blender shared library, depending on how much of the API is needed.
  • Refactored rasterizer
  • Basic optimizations
      • Render by similar program/materials/textures to reduce state changes and draw calls (see the sort-key sketch after this list)
      • Backface culling
  • Rewrote math module in Python
      • NumPy isn't available yet for PyPy.
  • Implemented more physics features
      • World timing, step size, etc.
      • Basic collision shapes: plane, box, cylinder, etc. (still need to test convex and triangle mesh)
      • Functions: position, rotation, mass, angular/linear velocities, add force, add torque, etc.
      • Ray casting
  • Implemented input module
      • Callback based; handles keyboard, mouse, joysticks, 3D mice, gesture cameras, etc.
      • Keeps track of input state, including single- and double-clicking.
      • Originally, I created this for Flow on the BGE, so I simply ported it over.
  • Implemented more camera features
      • Viewports, orthographic mode
  • Implemented asset management
      • This is analogous to the BGE's LibLoad system.
      • You can import any type of datablock.
      • All data is available to the user.
  • Implemented scene management
      • This is analogous to the BGE's overlays/underlays.
  • Implemented user classes
      • This is analogous to subclassing KX_GameObject, etc.
      • This allows the user to subclass any data type: scene, world, object, empty, camera, lamp, mesh, material, texture, etc.
      • The user specifies the class name as a custom property in Blender (similar to the BGE's Python controller in module mode). The class is then defined in a Python file in a specified directory and looked up at runtime (see the lookup sketch after this list).
      • The user can also create instances at runtime, which was not possible in the BGE. The data can come from an existing object in the database, from another blend file that can be loaded, or you can manually create all the object data if you want.
          • e.g. this allows for dynamic creation of geometry (however, VBOs would need to be updated, etc.)
  • Implemented basic sound system
      • This uses FFmpeg and PulseAudio.
  • Implemented Python's built-in profiling and logging modules
      • These have already proven useful :yes: (a quick usage sketch follows this list)
      • In addition, Python's standard library includes the pdb module, which is an interactive source code debugger.
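To illustrate the state-sorting item above: the idea is just to sort the draw list by a key built from the program/material/texture handles so equal states end up adjacent. A self-contained sketch (the record fields are stand-ins, not Harmony's actual attribute names):

```python
from collections import namedtuple

# Hypothetical draw record; real engine objects carry GL handles instead.
Draw = namedtuple("Draw", "name program material texture")

def state_key(d):
    # Group by shader program first (the most expensive switch),
    # then by material and texture.
    return (d.program, d.material, d.texture)

def draw_scene(draws):
    current = None
    for d in sorted(draws, key=state_key):
        key = state_key(d)
        if key != current:
            # In the real renderer: glUseProgram, uniform uploads, glBindTexture.
            print("state change ->", key)
            current = key
        print("  draw", d.name)

draw_scene([Draw("cube", 1, "wood", 7), Draw("monkey", 0, "skin", 3),
            Draw("floor", 1, "wood", 7), Draw("lamp", 0, "metal", 3)])
```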
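And here's roughly how the user-class lookup could work, using a custom property plus importlib; the property name, script directory, and constructor signature are my placeholders here, not necessarily what Harmony does internally:

```python
import importlib
import sys

# Make the game's script directory importable; the path is a placeholder.
sys.path.append("/path/to/game/scripts")

def instantiate_user_class(datablock):
    """Map a custom property like 'player.Player' to a user-defined class."""
    spec = datablock.get("class")          # hypothetical property name
    if not spec:
        return None
    module_name, _, class_name = spec.rpartition(".")
    module = importlib.import_module(module_name)
    cls = getattr(module, class_name)
    return cls(datablock)                  # wrap the engine datablock
```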
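The profiling/logging hookup is plain standard library, e.g.:

```python
import cProfile
import logging
import pstats

logging.basicConfig(level=logging.DEBUG,
                    format="%(asctime)s %(name)s %(levelname)s: %(message)s")
log = logging.getLogger("engine")

def main_loop():
    log.debug("frame start")   # stand-in for the real per-frame work

profiler = cProfile.Profile()
profiler.enable()
main_loop()
profiler.disable()

# Print the ten most expensive calls by cumulative time.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```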

So Phase 1 is mostly done. Phase 1's goal was to set up the overall architecture, as I want to get the foundation right before adding more advanced features. It's pretty crude right now, thus alpha-level.

The last thing I need to do is frustum culling, and maybe texture blending, although that's done through shaders now. The next step is to port Flow over so I can do a real "game" with the engine -> more refactoring/bug fixing. Anyhow, I'd rather release it with a decent example to get things started.
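The culling test itself isn't much code. Here's a pure-Python sphere-vs-frustum sketch (no NumPy, per the PyPy constraint), using the well-known Gribb/Hartmann trick of extracting the six planes from the combined model-view-projection matrix:

```python
import math

def extract_planes(m):
    """m: 4x4 MVP as a list of four rows (column-vector convention, clip = m * p).
    Returns six normalized planes as (a, b, c, d)."""
    raw = [
        [m[3][i] + m[0][i] for i in range(4)],  # left
        [m[3][i] - m[0][i] for i in range(4)],  # right
        [m[3][i] + m[1][i] for i in range(4)],  # bottom
        [m[3][i] - m[1][i] for i in range(4)],  # top
        [m[3][i] + m[2][i] for i in range(4)],  # near
        [m[3][i] - m[2][i] for i in range(4)],  # far
    ]
    planes = []
    for a, b, c, d in raw:
        n = math.sqrt(a * a + b * b + c * c)
        planes.append((a / n, b / n, c / n, d / n))
    return planes

def sphere_visible(planes, center, radius):
    """Cull the object if its bounding sphere is fully behind any plane."""
    x, y, z = center
    return all(a * x + b * y + c * z + d >= -radius for a, b, c, d in planes)
```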

I can get away without doing lighting for now, since I just need shadeless mode. Lighting is doable right now, though, since the light objects and material information are available; one just needs to write the appropriate shader. I'll save all that work for Phase 2…

Phase 2 Goals:

  • Instancing
  • Lighting
  • LOD
      • Mesh, material, and shader levels
  • Animations
      • Hardware skinning
  • Text (drawing)
  • Parenting
      • Object and per-vertex
      • This is where a scene graph is really needed, and maybe space partitioning.
  • Shadows
  • Particle system
  • Blender file features
      • Groups
      • Linking?
  • More physics
      • Composite objects (compound in BGE/Bullet)
  • Polygon sorting/transparency
  • Stereoscopy

Any chance you could include rigged ragdolls in the "root" of the armature code?

(a system to automagically generate 6DOF ragdolls, and then attempt to match IK actions?)

If I ever get a 64-bit CPU… I would love to port Wrectified to something that can handle AAA graphics without endless optimizations.

ODE is popular among the robotics crowd, so I don't know if that is relevant…

http://ode-wiki.org/wiki/index.php?title=HOWTO_rag-doll
http://monsterden.net/software/ragdoll-pyode-tutorial

Care to explain more about the game script API? Cos I don't get that part.
Is it still gonna be AGPL?

  • I am thinking about creating a better game engine UI so only the game engine relevant features are shown in Blender (personally, I find this a real inconvenience).

I understand this part, but I guess that will show what's been supported. But then again, the number of users is important.

Oh, I was referring to how, in Blender, you can't really tell which features are used in the game engine other than the explicit ones.

I've been working on a blend file parser for reading the structs from the Blender file, which could replace relying on Blender. If that dependency is removed, then technically Harmony doesn't have to be GPL. LGPL is another option, but since it's in Python the source code is openly available, so encrypting or obfuscating the code would be needed.
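To give an idea of what the parser deals with: every .blend file starts with a fixed 12-byte header encoding pointer size, endianness, and version. A minimal reader:

```python
def read_blend_header(path):
    """Parse the fixed 12-byte header of a .blend file.

    Bytes 0-6:  b'BLENDER'
    Byte  7:    '_' = 32-bit pointers, '-' = 64-bit pointers
    Byte  8:    'v' = little-endian, 'V' = big-endian
    Bytes 9-11: version digits, e.g. b'271' for Blender 2.71
    """
    with open(path, "rb") as f:
        header = f.read(12)
    if header[:7] != b"BLENDER":
        raise ValueError("not a .blend file (or it's gzip-compressed)")
    pointer_size = 8 if header[7:8] == b"-" else 4
    endianness = "little" if header[8:9] == b"v" else "big"
    version = header[9:12].decode("ascii")
    return pointer_size, endianness, version
```

Everything after that header is a sequence of file blocks whose struct layouts are described by the DNA1 block near the end of the file, which is what makes a standalone parser possible.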

I see, thx for the info. I found this interesting doc, "Optimized View Frustum Culling Algorithms": http://www.cse.chalmers.se/~uffe/vfc.pdf It might be useful. Still, why don't you use some rendering library and build on top of it, like bgfx? Is the game engine just a personal project, or are you going to target a lot of users in the end?

This project looks very interesting! I have waited for so long to see a proper open-source engine using modern OpenGL. If this integrates well with Blender, it will be perfect!

Do you have plans for multipass materials? Some kind of technique like passes in OGRE or SubShaders in Unity. The BGE sometimes makes me bash my head on the wall because it only allows one shader pass.

Pretty much all of the open-source game engines use the older OpenGL API, understandably so. Having to support both modes makes for more work no matter how good the encapsulation is - I wanted a clean, modern, focused codebase.

I've been looking into render to texture, FBOs, multipass rendering, etc. Eventually I will implement those some time later; I still need basic lighting first.

Off the top of my head, implementing multipass rendering, i.e. running multiple programs on an object, isn't too difficult. Currently, I've started off simple and have only two default programs: one for vertex colors and one for texture mapping. The engine is already shader-based, so it's like adding another shader to the object's list of shaders, which would require some refactoring.

Here's an idea of what it would take:
http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-14-render-to-texture/
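As a very rough sketch of that per-object loop (the PyOpenGL calls are real; the pass objects and their fields are my assumptions about how it might be refactored):

```python
from OpenGL.GL import (GL_FRAMEBUFFER, GL_TRIANGLES, GL_UNSIGNED_INT,
                       glBindFramebuffer, glBindVertexArray, glDrawElements,
                       glUseProgram)

def draw_multipass(obj, passes):
    """Run each shader pass over the same geometry.

    Each pass is assumed to carry a compiled GL program and an FBO handle
    (0 = default framebuffer); earlier passes can render into textures that
    later passes sample, as in the render-to-texture tutorial above.
    """
    glBindVertexArray(obj.vao)
    for p in passes:
        glBindFramebuffer(GL_FRAMEBUFFER, p.fbo)   # 0 draws to the screen
        glUseProgram(p.program)
        p.bind_uniforms(obj)                       # hypothetical per-pass setup
        glDrawElements(GL_TRIANGLES, obj.index_count, GL_UNSIGNED_INT, None)
```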

Random, but just found this article on how shadow volumes can be generated with geometry shaders:
http://http.developer.nvidia.com/GPUGems3/gpugems3_ch11.html

Just take your time, I will keep an eye on this thread. :yes:

Ogre 2.0 has an interesting material system:
http://yosoygames.com.ar/wp/2014/05/a-glimpse-of-whats-comming-to-ogre-2-0-final/
It's really interesting because the material is not only data-driven but can also contain C++ code. Maybe you can take inspiration from this system. :eyebrowlift2:

Really interesting project! Isn't there a repository or something where we can view the code you're working on? It would be interesting for learning (I'm always interested in learning new things about C/C++).

It's still undergoing fundamental changes right now, so it's not quite there yet. Currently, I'm working on porting my original project from the BGE. I'm aiming for a release of both the engine and Flow by the end of the month - so far I'm on schedule ;). It'll be on GitHub here.

What he's implementing in that link reminds me of Blender's system. Hmm, I found this comment interesting from that Ogre link:

It's great to hear the work is progressing, and the features are very useful and much appreciated. Unfortunately, the fact that you are working in isolation is a problem for me.

I am an experienced Ogre developer and a stakeholder with several commercial products leveraging Ogre, but my attempts to establish communication with the core members have failed. My bug reports and patch submissions go unused, and recommendations fall on deaf ears. I still find many critical bugs and much-needed feature additions. Lately I have been making local modifications, and I find it difficult to justify the time to submit patches.

I understand Ogre is going through a lot of changes, but the community should be involved. I feel I am part of the community but not recognized. Ogre is not evolving the way I need, which is forcing me to investigate alternatives.