Sdfgeoff's node shaders and other experiments

what about the ability to convert your ship into energy and tunnel through circuits, but you must configure the circuits correctly to end up where you need to be?

same circuits can conduct power, or maybe light?

real space, wire space, and virtual space?

travel in real space -> enter computer maze (unlock computer in virtual space) -> close connection in wire in real space -> travel through real space -> send virtual avatar down wire?

Here’s what I’ve conglomerated; this is the current state of the concept.

Looking around the environment and pressing a ‘ping’ button makes structures within the walls visible:

By flying up to the screen and docking (hitting ‘ping’ again), you open an interface. It’s based on logic bricks and looks like this (concept art):

And another panel for another interface; this one is a distribution box that handles the routing of physical resources.

The puzzles revolve around getting the environment to fulfil a function.
Interestingly, this was designed before reading BluePrints’ post.

You can also play with the idea of an insect sized drone in a human sized environment. Can’t open a door? Fly through a vent instead.

This is probably also going to come into it. The vehicle being flown is small, not insect-sized but ~20 cm across, and thus capable of navigating many unexpected places.

And a video of the scan mechanic:

Interestingly, the tunnel scan and the pipe scan are implemented differently.
Scanning the tunnels is done with a 2D filter: by examining the depth buffer, a shockwave is sent out across the screen.
The pipes, however, have to be visible through the geometry, so another trick is required: using the z-offset on the material. The shockwave there is controlled by object colour, with the maximum distance on one channel and the percentage of that distance on another. It seems to run pretty well.
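
For reference, here's a minimal sketch of how the object-colour side of that could be driven from Python. The property names ('ping_active', 'ping_radius', 'pipe') and the constants are assumptions for illustration, not the actual setup from the thread; the material/filter that reads the colour is built with nodes.

```python
# Minimal sketch (not the thread's exact setup) of driving a scan shockwave
# through object colour in the BGE. Assumes the pipe objects' materials read
# object colour: R = maximum scan radius (normalised), G = current radius as
# a fraction of that maximum.
import bge

MAX_RADIUS = 50.0  # world units the ping can reach (assumed value)
SPEED = 30.0       # units per second the shockwave expands (assumed value)

def update_ping(cont):
    own = cont.owner
    radius = own.get("ping_radius", 0.0)

    if own.get("ping_active", False):
        radius += SPEED / bge.logic.getLogicTicRate()
        if radius > MAX_RADIUS:
            own["ping_active"] = False
            radius = 0.0
    own["ping_radius"] = radius

    # Pack the scan parameters into the object colour of every pipe,
    # where the material can read them.
    for obj in bge.logic.getCurrentScene().objects:
        if "pipe" in obj:
            obj.color = [MAX_RADIUS / 100.0, radius / MAX_RADIUS, 0.0, 1.0]
```

Hooked up to an Always sensor (pulse mode) and a Python module controller, the ‘ping’ key then only has to set own["ping_active"] = True.
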
My brain works far better in nodes than it does in GLSL, though I guess that’s just practice.

Tomorrow I’ll look at GUIs, I guess, and how to make the puzzle mechanic.


Looks great! If you wanted to, you could do the pipes in an overlay scene with the camera position mapped exactly to the main scene. Usually there’s a problem with a one-frame offset making things look a little wonky, but I seem to remember Monster found a good workaround for that. It was using the pre- or post-draw callbacks, maybe…

Rotate a ‘dummy’ camera, then set the overlay camera and the main camera to the dummy’s angle (so overlay and main are one frame behind the dummy), but they match each other, so there’s no one-frame issue.
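
In BGE Python that workaround might look roughly like this; the scene names ("Scene", "PipeOverlay") and the dummy object are placeholders for whatever the actual setup uses.

```python
# Rough sketch of the dummy-camera workaround described above, assuming a
# main scene, an overlay scene called "PipeOverlay" with its own camera,
# and a dummy object that the mouselook script actually rotates. Both real
# cameras copy the dummy every frame, so they are one frame behind it but
# always in sync with each other.
import bge

def sync_cameras(cont):
    dummy = cont.owner  # the object the mouselook rotates

    scenes = {s.name: s for s in bge.logic.getSceneList()}
    main_cam = scenes["Scene"].active_camera
    overlay_cam = scenes["PipeOverlay"].active_camera

    for cam in (main_cam, overlay_cam):
        cam.worldPosition = dummy.worldPosition
        cam.worldOrientation = dummy.worldOrientation
```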

A generic starting blend for testing models. It contains a basic outdoor lighting setup and movement around the map.


Uses the 2D filters I developed earlier in this thread to do chromatic aberration, vignette and barrel distortion (all subtle).
Blend:
Walking.blend (1.09 MB)

Looks great!
Strange thing: when I use the shader on my own scene, everything comes out dark blue…

Set blurPasses to 1.0; if it’s 0, everything comes out blue.

And does blur only have setting 0 for blue and 1 for on? Because up to 1000, nothing changes.

Do you have anything against using mouselook scripts? Because I would like to see how the sky looks.

@smoking_mirror and @cotax
Thanks for pointing that out. I’ll try to fix that this evening.
blurPasses is not blurring the camera; it’s the number of samples to use for the chromatic aberration. Yeah, it’s a bad name.

@Lostscience:
Yeah, it’s more of an RTS-style thing. It was developed so I would have an environment to test my walking algorithm for a uni assignment:


I’ll see if I can make it into a simple FPS-style thing this evening.
But for reference, the sky looks like this:


It’s a solid noise-covered sphere. You can enable its visibility in the outliner if you want to see it.
The sun is a circle with vertex colour and additive alpha. Again, you can change the visibility in the outliner. I just hide them so I can’t see them while I’m working.

Does uni stand for the Unity game engine?

University.

Yup, uni stands for university. I’m currently studying Mechatronics (robotics) at my local university.

Here’s a video for you all:

The inverse kinematics is calculated manually, as are a whole heap of the transforms. I now think I understand what reference frames are all about and how to use them!
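
To make the reference-frame idea concrete, here's a tiny numpy sketch (not the assignment code, and with made-up dimensions) of re-expressing a world-frame foothold target in a leg's local frame, which is the frame a leg's IK works in.

```python
# Small numerical illustration of using reference frames: a foot target
# given in the world frame is re-expressed in a leg's local frame by
# inverting the body->leg transform. 4x4 homogeneous transforms via numpy;
# all dimensions below are made-up example values.
import numpy as np

def transform(rotation_z, translation):
    """Homogeneous transform: rotation about Z followed by a translation."""
    c, s = np.cos(rotation_z), np.sin(rotation_z)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = translation
    return T

# world -> body (robot standing at x=1, y=2, yawed 30 degrees)
T_world_body = transform(np.radians(30), [1.0, 2.0, 0.3])
# body -> front-left leg root (mounted 0.1 m forward, 0.1 m left)
T_body_leg = transform(np.radians(90), [0.1, 0.1, 0.0])

# A foothold target expressed in world coordinates...
target_world = np.array([1.3, 2.2, 0.0, 1.0])

# ...re-expressed in the leg frame, which is what the leg's IK wants.
T_world_leg = T_world_body @ T_body_leg
target_leg = np.linalg.inv(T_world_leg) @ target_world
print(target_leg[:3])
```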

Originally the assignment had the student calculate the kinematics (forward and inverse) for a single robot arm. I stuck four of them on a robot and used an algorithm I’ve been thinking about for many years to decide where to move the legs.
The blue trail coming from the one leg is there because, as part of the assignment, we have to show we can control the motion of the robot ‘arm’.

Interesting. I’ve always thought IK must be useful in robotics.

IK is a key component of any jointed robot, though it is hard to do and computationally expensive. In the above video I calculated it analytically using the geometric method (drawing diagrams and applying Pythagoras and the cosine rule). For more complex systems you can solve it numerically using the Jacobian (derived from the forward kinematics, which is trivial) to ‘experimentally’ determine what effect each joint has on the end position, and from that approximate a solution.
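
As a concrete example of the geometric method, here's the standard closed-form solution for a 2-link planar arm; the link lengths and elbow convention below are illustrative assumptions, not the actual robot's values.

```python
# Geometric (cosine-rule) IK for a 2-link planar arm: solve for the two
# joint angles that put the end effector at (x, y).
from math import acos, atan2, cos, sin

def two_link_ik(x, y, l1, l2, elbow_up=True):
    """Return (theta1, theta2) in radians so the end effector reaches (x, y)."""
    r2 = x * x + y * y
    # Cosine rule gives the elbow angle directly.
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = acos(c2)
    if not elbow_up:
        theta2 = -theta2
    # Shoulder angle: direction to the target minus the offset the elbow adds.
    theta1 = atan2(y, x) - atan2(l2 * sin(theta2), l1 + l2 * cos(theta2))
    return theta1, theta2

# Forward-kinematics check that the solution really lands on the target.
t1, t2 = two_link_ik(0.3, 0.2, 0.25, 0.2)
x = 0.25 * cos(t1) + 0.2 * cos(t1 + t2)
y = 0.25 * sin(t1) + 0.2 * sin(t1 + t2)
print(round(x, 3), round(y, 3))  # ~ (0.3, 0.2)
```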

Hmm, maybe it’s time to. Uh. Why did I build this again?



It’s sitting on my desk next to me, but I don’t have an embedded processor with enough processing power to handle the IK. I built it several months ago and it’s been awaiting inspiration!


Have you ever seen this:

I wondered how much use it would be in robotics. It seems like it’s not so easy to just take IK calculations and transfer them to the real world.

Have you looked at other ECS, like the Intel Compute Stick or the RPi?

@agoose:
No, I hadn’t, actually! That is a fantastic suggestion. I’ve got a Pi sitting around in another project, so I could use that, or perhaps I could get my hands on one of the “Chip” $9 computers in a few months. The Pi is quite a big thing compared to the walking robot.

Why not run it via the Pi on a tether until you’ve determined how much compute power is required?

@phil: That’s a good idea.

Well, robotics seems to be a popular topic here! (Or maybe it’s just that it walks?) Here’s another one done for the same course. For this one I used the built-in IK (so don’t blame me for the glitch), but I had to do the path planning. I built this as a simulator so I could test things without needing the actual robot. You can control this one over a network by sending URScript (which I had to parse).
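
A hedged sketch of what the network side could look like: listen on a TCP socket and pick the joint targets out of incoming movej(...) lines. The port number, the movej-only subset and the regular expression are assumptions for illustration; the simulator's actual parser handles more of URScript than this.

```python
# Minimal sketch of receiving URScript-style commands over TCP and parsing
# out the joint targets of movej([...], ...) lines.
import re
import socket

MOVEJ = re.compile(r"movej\(\[([^\]]+)\]")  # capture the joint-angle list

def serve(port=30002):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("", port))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn, conn.makefile("r") as lines:
            for line in lines:
                match = MOVEJ.search(line)
                if match:
                    joints = [float(v) for v in match.group(1).split(",")]
                    print("move to joint angles:", joints)

if __name__ == "__main__":
    serve()
```

Inside a running game you would want the socket to be non-blocking so it doesn't stall the logic loop; the blocking version above is just the shortest way to show the parsing.
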

I’ve got yet another Blender robot simulation, which I’ll record this evening.

@smoking_mirror:
Nice find. I showed that to a couple of my peers and we had a good laugh. Learning algorithms is an area I find fascinating, and it’s likely I’ll dedicate a couple of weeks to it at some point.
Applying IK to the real world? Well, if it were an ideal world you could take it straight off the computer into the real world, but there are nuances like:

  • Acceleration limits on joints/other physical constraints that are hard to determine
  • Gravity (the main inaccuracy of that previous simulation)
  • Lack of global knowledge (answering ‘where am I in global space?’ is actually a very hard problem)