How Were N64 And PS1 Textures Made Back In The Day? Need Help.

Hey Blender community! I have a question and I hope you can help me out; I'm done pulling my hair out. I apologize in advance if this is the wrong place to post this, but this place seems fairly active. I will try to provide as much information as possible.

I'm sorry for the noobish question, but I want to recreate texture maps like the ones used back in the N64/PS1 era (mostly N64). Again, I'm sorry if this sounds like a "nooby" question, but I can't seem to find any information online on how they managed to get good-looking textures back in the day. I've been searching and researching for a week now for how they made the texture maps and what the production pipeline/workflow was. If anyone worked on an N64 title back in the day, could you please shed some light on this?

Most N64 textures are 64x64 bitmap images (sometimes smaller or slightly bigger). What I want to know is how the texture artists/3D artists at big companies made them back then. Were they drawn pixel by pixel? Did they just downsize high-resolution images? The N64 also used mipmapping.

Here are some examples of what I’m trying to achieve, the bush and plant textures seem to be well done for their size. These are texture maps from N64 games:
https://imgur.com/a/tZrg5

For character textures, what's the purpose of the section I highlighted in green on this texture? Was it made to "cheat" light reflection/gloss when applied to the 3D model?

What you are looking at is the texture for Donkey Kong from Donkey Kong 64: https://imgur.com/a/qlfBe

The Nintendo 64's hardware had bilinear interpolation, which essentially "blurred" the texture maps; without it, the textures would look like an unfiltered PS1 game. (The PS1, on the other hand, could store higher-resolution texture maps thanks to its storage.) I don't think production modeling software supports this by default. How would I apply a texture map and get the same rendering results as the N64, instead of it looking like a pixelated mess?
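To make the difference concrete, here is a minimal sketch of bilinear vs. point (nearest) texture sampling in plain Python. It uses the standard 4-sample bilinear formula, not the N64's actual 3-sample hardware variant, and assumes the texture is just a 2D list of grayscale values:

```python
def sample_bilinear(texture, u, v):
    """Sample `texture` (rows of grayscale values) at normalized u, v in [0, 1]."""
    h, w = len(texture), len(texture[0])
    # Map normalized coords into texel space
    x = u * (w - 1)
    y = v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Blend the four nearest texels by their fractional distances
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

def sample_nearest(texture, u, v):
    """PS1-style point sampling: just snap to the closest texel."""
    h, w = len(texture), len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]
```

Sampling halfway between a black and a white texel returns a smooth blend with `sample_bilinear` but a hard edge with `sample_nearest`; that blend is exactly the "blur" you're describing.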

If I am wrong about this please correct me. Here are some pictures and more information on Bilinear filtering: http://forum.devmaster.net/t/bilin-filtering-with-3-samples/18338/8

Here are examples without Bilinear filters:
https://github.com/gonetz/GLideN64/issues/306

Here is an image with the filter shut off on a game:
http://i.imgur.com/alKCKFT.png

How were shadows and lighting done with vertex lighting? Is it still possible to recreate the look? Someone made a post on this a year ago, but not much was explained.

OP in the post included images:
https://www.reddit.com/r/Unity3D/comments/4wl7wo/help_resources_for_n64_style_lighting_and_shading/

Here’s some of my own that I took. Is it just standard point lights?:

Look on the floors that are “lit up” and back wall that is “dark”:
https://i.imgur.com/mA4Rct4.png

Another example, lit scene and dark scene with same assets:
https://imgur.com/a/TDxUZ

Lastly how were “reflection” effects pulled off?

Here are some examples:

The Legend of Zelda: Ocarina of Time's Mirror Shield and Majora's Mask's shield border:

https://imgur.com/a/SgtGL

Paper Mario bush details; they look more "glossy" around the borders (the bushes in the background near the tree):

https://imgur.com/a/TDxUZ
https://i.imgur.com/mffRiys.png

Would we now in modern 3D applications just “cheat” this with glossy and reflective materials?

I apologize in advance if this is straightforward. I might be overthinking this. I hope someone can shed some light on things and help :(. Thank you for reading; I tried to be as descriptive as possible.

Also, back in the day they apparently used these programs. I 110% understand it's not the tool that matters but the artist using it:

3D Software: Multigen, Ningen, Power Animator, Wavefront, Softimage, 3D Studio, 3DS Max/Maya, Lightwave
Texture Art: Photoshop (a very old version) and Deluxe Paint (DPaint).

For reflections, I believe it's just a "chrome ramp" spherical gradient applied using "reflection" mapping in Blender Internal.

You can make it more realistic by using an environment probe to generate a cubemap reflection, and then using that as the reflection map.
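For what it's worth, the "chrome ramp" lookup can be sketched in a few lines. This is the classic OpenGL sphere-map formula, offered as an assumption about the general technique rather than what the N64 microcode literally computed; `reflect` is assumed to be a view-space reflection vector:

```python
import math

def spheremap_uv(reflect):
    """Map a view-space reflection vector (rx, ry, rz) to UV coords on a
    'chrome ramp' sphere-map texture (classic GL_SPHERE_MAP formula)."""
    rx, ry, rz = reflect
    m = 2.0 * math.sqrt(rx * rx + ry * ry + (rz + 1.0) ** 2)
    return rx / m + 0.5, ry / m + 0.5
```

A reflection pointing straight back at the viewer lands in the center of the chrome texture, which is why those gradients appear "attached" to the camera as the shield turns.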

Thank you for the reply. Still looking for solutions for the others. Thanks again.

I should probably clarify as well: I think this is more of a Photoshop or image-editing question, because it's about making these certain kinds of texture maps.

While I still use Blender from time to time for modeling, I mainly use 3ds Max and Maya.

The main problem is that there aren't a lot of "popular" forums to ask on, so I thought I might ask here.

Thanks again.

It's a bit hard to answer all of them sufficiently since I don't know your background. Give us a bit more.
Have you already collected experience in game development? Are you familiar with basic 3D game asset creation?

I'm asking because some of your questions suggest that you are well informed, and then again, others seem a bit obsolete. (The texture filtering one is the one I didn't get.)

MY GUESSES:

Yes, probably only basic lighting on top of vertex lighting.
I guess the 3D models were unwrapped very differently back then, given the scarce texture space…
So they probably put UV vertices on one of the bright spots of the texture, corresponding to a 3D vertex on, say, the top of the shoulder or the top of the head. For every part of the model they wanted to shade brighter, the UV vertex corresponding to the nearest 3D vertex got a bright spot on the texture map.
The result would be a very crude light gradient on the model, which surely looked better than vertex color alone.

The vertex lighting would then work on top of it.
Just my guess! :slight_smile:
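For context, "vertex lighting" here means Gouraud shading: brightness is computed per vertex and the rasterizer interpolates it across each triangle, with no per-pixel lighting. A minimal sketch, assuming a single directional light and unit-length vectors (the function names are mine, not any engine's API):

```python
def vertex_light(normal, light_dir, ambient=0.2):
    """Per-vertex Lambert term: ambient plus clamped N.L, capped at 1.0."""
    ndotl = sum(n * l for n, l in zip(normal, light_dir))
    return min(1.0, ambient + max(0.0, ndotl))

def lit_vertex_colors(normals, light_dir):
    # One brightness value per vertex; the GPU blends these across each face,
    # which is why low-poly N64 meshes show those smooth-but-coarse gradients.
    return [vertex_light(n, light_dir) for n in normals]
```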

Apply a texture in an engine? Or in Blender?
Every (good) engine has access to texture filtering, and Blender definitely has it (on the Image Texture node: the "Interpolation" setting).
By default it's set to Linear.

If you want the PS1-style texture filtering, look for "point filtering" in 3D engines or "Closest" in Blender.

True vertex lighting in Blender?
That's actually an interesting question.

Boys and girls, how do we go about this? :slight_smile:

No, probably just a vertex-colored floor. Lights were quite expensive.
The general rule of forward rendering is: for every additional light source that affects the mesh, the mesh has to be rendered again.
That's why I doubt there were any point lights involved in your shots. (Or any lights at all.)
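One way to square "it looks lit" with "lights were too expensive" is baking: compute each light's contribution per vertex once, offline, and store it as vertex color, so runtime needs no live lights (and no extra forward-rendering passes) at all. A toy sketch under that assumption, grayscale only, with inverse-square falloff:

```python
def bake_point_light(verts, light_pos, intensity=1.0):
    """Bake one point light's falloff into per-vertex brightness, offline.
    At runtime the mesh is drawn with these stored vertex colors and
    zero live light sources."""
    colors = []
    for v in verts:
        d2 = sum((a - b) ** 2 for a, b in zip(v, light_pos))  # squared distance
        colors.append(min(1.0, intensity / max(d2, 1e-6)))    # inverse-square falloff
    return colors
```

Vertices near the light bake out bright, far ones bake out dark, which matches the "lit floor, dark back wall" look in your screenshots.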

Yes.
Back then they probably just mapped “some” abstract texture with a reflection vector instead of the usual UV coords.
As BPR suggested:

This is a great thread already, gj :slight_smile:

You can get a very superficial insight into what they might have been using back then here:

Pixel-by-pixel is more of a retro thing for pixel artists. One artist I know just uses the pencil tool in Photoshop to create his art.
But I'd argue that's a bit different from what you are looking for.
I'd suggest using the regular tools (brush) and practising the "hand-drawn" style via YouTube tutorials. :slight_smile:

Is the specific 2D software important to you?
And why are you mentioning mipmapping in this context, if I might ask?

@rbx775

Hey, thanks for the reply; this will help a bit (even if it's speculation). Yes, I am fairly experienced with modeling. It's fun and rewarding. Sadly there is not much information on what I want to know, which is why I put effort into this post. Also, I have used a range of game engines and 3D applications, so yes, it's safe for everyone to assume I know at least the basics or more.

For the reflection effect, I'll probably just use a cubemap to recreate it (as stated by someone else).

I just mentioned mipmapping in case it involved something dealing with the design of the texture.

I just wanted to know if there were methods and steps for creating these types of textures (they are small). Textures in the style of the ones I linked in the album can be found in almost all N64 games.

Pretty much, I was asking if there was a faster way of creating them rather than painting them in pixel by pixel or doing the shading in the texture manually. I know the "Star Door" texture has some shading on it, but others seem like they would be a mess to create or edit.

Did they downsize high-resolution images? Did the software back then allow them to work faster? It seems fairly tedious if the artists did do it pixel by pixel.
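On the downsizing guess: even a plain box filter (averaging each block of source pixels into one texel) gets you most of the way, and old Photoshop versions could do essentially this with Image Size plus resampling. A sketch on a grayscale image stored as a list of rows — my own illustration, not a documented Nintendo pipeline — assuming dimensions divide evenly by `factor`:

```python
def downsize(image, factor):
    """Box-filter a grayscale image (list of rows) down by `factor`."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            # Average one factor x factor block into a single output texel
            block = [image[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out
```

So a 256x256 painting becomes a 64x64 texture with `downsize(img, 4)`; artists could then touch up the few texels that mattered instead of painting everything pixel by pixel.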