
High-speed Light Trails in Three.js


Sometimes I tactically check Pinterest for inspiration and creative exploration. Although one could also call it chronic procrastinating, I always find captivating ideas for new WebGL projects. That’s the way I started my last water distortion effect.

Today’s tutorial is inspired by this alternative Akira poster. It has this beautiful traffic time lapse with infinite lights fading into the distance:


Based on this creative effect, I decided to re-create the poster vibe but make it real-time, infinite and also customizable. All in the comfort of your browser!

Through this article, we’ll use Three.js and learn how to:

  1. instantiate geometries to create thousands (up to millions) of lights
  2. make the lights move in an infinite loop
  3. create frame rate independent animations to keep them consistent on all devices
  4. and finally, create modular distortions to ease the creation of new distortions or changes to existing ones

This is an intermediate tutorial; we'll skip over the basic Three.js setup and assume you're already familiar with the fundamentals.

Preparing the road and camera

To begin we’ll create a new Road class to encapsulate all the logic for our plane. It’s going to be a basic PlaneBufferGeometry with its height being the road’s length.

We want this plane to be flat on the ground, stretching away into the distance. But Three.js creates a vertical plane at the center of the scene. We’ll rotate it on the x-axis to make it flat on the ground (y-axis).

We’ll also move it by half its length on the z-axis to position the start of the plane at the center of the scene.

We’re moving it on the z-axis because position translation happens after the rotation. While we set the plane’s length on the y-axis, after the rotation, the length is on the z-axis.

export class Road {
  constructor(webgl, options) {
    this.webgl = webgl;
    this.options = options;
  }
  init() {
    const options = this.options;
    const geometry = new THREE.PlaneBufferGeometry(
      options.width,
      options.length,
      20,
      200
    );
    const material = new THREE.ShaderMaterial({
      fragmentShader,
      vertexShader,
      uniforms: {
        uColor: new THREE.Uniform(new THREE.Color(0x101012))
      }
    });
    const mesh = new THREE.Mesh(geometry, material);

    mesh.rotation.x = -Math.PI / 2;
    mesh.position.z = -options.length / 2;

    this.webgl.scene.add(mesh);
  }
}
const fragmentShader = `
    uniform vec3 uColor;
    void main(){
        gl_FragColor = vec4(uColor, 1.);
    }
`;
const vertexShader = `
    void main(){
        vec3 transformed = position.xyz;
        gl_Position = projectionMatrix * modelViewMatrix * vec4(transformed.xyz, 1.);
    }
`;

After rotating our plane, you’ll notice that it disappeared. It’s exactly lined up with the camera. We’ll have to move the camera a bit up the y-axis for a better shot of the plane.

We’ll also instantiate and initiate our plane and move it on the z-axis a bit to avoid any issues when we add the distortion later on:

// Note: App extends a small WebGL boilerplate class (WebGLApp is a stand-in
// name here) that sets up the renderer, scene, camera and clock for us.
class App extends WebGLApp {
  constructor(container, options){
    super(container);

    this.camera.position.z = -4;
    this.camera.position.y = 7;
    this.camera.position.x = 0;

    this.road = new Road(this, options);
  }
  init(){
    this.road.init();
    this.tick();
  }
}

If something is not working or looking right, zooming out the camera in the z-axis can help bring things into perspective.
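For example (the exact value is arbitrary, just for debugging):

// Temporarily move the camera further back to see more of the scene
this.camera.position.z = 20;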

Creating the lights

For the lights, we’ll create a CarLights class with a single tube geometry. We’ll use this single tube geometry as a base for all other lights.

All our tubes are going to have different lengths and radii. So, we’ll set the original tube’s length and radius to 1. Then, in the tube’s vertex shader, we’ll multiply the original length/radius by the desired values, resulting in the tube getting its final length and radius.

Three.js makes TubeGeometries using a Curve. To give it that length of 1, we’ll create the tube with a LineCurve3 with its endpoint at -1 on the z-axis.

import * as THREE from "three";
export class CarLights {
  constructor(webgl, options) {
    this.webgl = webgl;
    this.options = options;
  }
  init() {
    const options = this.options;
    let curve = new THREE.LineCurve3(
      new THREE.Vector3(0, 0, 0),
      new THREE.Vector3(0, 0, -1)
    );
    let baseGeometry = new THREE.TubeBufferGeometry(curve, 25, 1, 8, false);
    let material = new THREE.MeshBasicMaterial({ color: 0x545454 });
    let mesh = new THREE.Mesh(baseGeometry, material);

    this.mesh = mesh;
    this.webgl.scene.add(mesh);
  }
}

Instantiating the lights

Although some lights are longer or thicker than others, they all share the same geometry. Instead of creating a bunch of meshes for each light, and causing lots of draw calls, we can take advantage of instantiation.

Instantiation is the equivalent of telling WebGL “Hey buddy, render this SAME geometry X amount of times”. This process allows you to reduce the amount of draw calls to 1.

Although it’s the same result, rendering X objects, the process is very different. Let’s compare it with buying 50 chocolates at a store:

A draw call is the equivalent of going to the store, buying only one chocolate and then coming back. Then we repeat the process for all 50 chocolates. Paying for the chocolate (rendering) at the store is pretty fast, but going to the store and coming back (draw calls) takes a little bit of time. The more draw calls, the more trips to the store, the more time.

With instantiation, we’re going to the store and buying all 50 chocolates and coming back. You still have to go and come back from the store (draw call) one time. But you saved up those 49 extra trips.

A fun experiment to test this even further: Try to delete 50 different files from your computer, then try to delete just one file of equivalent size to all 50 combined. You’ll notice that even though it’s the same combined file size, the 50 files take more time to be deleted than the single file of equivalent size 😉

Coming back to the code: to instantiate we’ll copy our tubeGeometry over to an InstancedBufferGeometry. Then we’ll tell it how many instances we’ll need. In our case, it’s going to be a number multiplied by 2 because we want two lights per “car”.

Next we’ll have to use that instanced geometry to create our mesh.

class CarLights {
    ...
	init(){
        ...
        let baseGeometry = new THREE.TubeBufferGeometry(curve, 25, 1, 8, false);
        let instanced = new THREE.InstancedBufferGeometry().copy(baseGeometry);
        instanced.maxInstancedCount = options.nPairs * 2;
        ...
        // Use "instanced" instead of "baseGeometry"
        let mesh = new THREE.Mesh(instanced, material);
    }
}

Although it looks the same, Three.js now renders 100 tubes in the same position. To move them to their respective positions we’ll use an InstancedBufferAttribute.

While a regular BufferAttribute describes the base shape, for example its position, uvs, and normals, an InstancedBufferAttribute describes each instance of the base shape. In our case, each instance is going to have a different aOffset and a different radius/length aMetrics.

When each instance passes through the vertex shader, WebGL gives us the attribute values corresponding to that instance. Then we can position each instance using those values.

We’ll loop over all the light pairs and calculate their XYZ position:

  1. For the x-axis, we’ll calculate the center of its lane, then offset each light by half the car’s width (how separated the lights are) plus a random offset.
  2. For its Y-axis, we’ll push it up by its radius to make sure it’s on top of the road.
  3. Finally, we’ll give it a random Z-offset based on the length of the road, putting some lights further away than others.

At the end of the loop, we’ll add the offset twice, once per light, with only the x-offset differing.

class CarLights {
    ...
    init(){
        ...
        let aOffset = [];

        let sectionWidth = options.roadWidth / options.roadSections;

        for (let i = 0; i < options.nPairs; i++) {
          let radius = 1.;
          // 1a. Get its lane index
          // Instead of random, keep lights per lane consistent
          let section = i % 3;

          // 1b. Get its lane's centered position
          let sectionX =
            section * sectionWidth - options.roadWidth / 2 + sectionWidth / 2;
          let carWidth = 0.5 * sectionWidth;
          let offsetX = 0.5 * Math.random();

          let offsetY = radius * 1.3;

          // 1c. Give it a random offset along the road's length
          let offsetZ = Math.random() * options.length;

          aOffset.push(sectionX - carWidth / 2 + offsetX);
          aOffset.push(offsetY);
          aOffset.push(-offsetZ);

          aOffset.push(sectionX + carWidth / 2 + offsetX);
          aOffset.push(offsetY);
          aOffset.push(-offsetZ);
        }
        // Add the offset to the instanced geometry.
        instanced.addAttribute(
          "aOffset",
          new THREE.InstancedBufferAttribute(new Float32Array(aOffset), 3, false)
        );
        ...
    }
}

Now that we've added our aOffset attribute, let's go ahead and use it in the vertex shader like a regular BufferAttribute.

We'll replace our MeshBasicMaterial with a ShaderMaterial and create a vertex shader where we'll add aOffset to the position:

class CarLights {
  init(){
    ...
    const material = new THREE.ShaderMaterial({
      fragmentShader,
      vertexShader,
      uniforms: {
        uColor: new THREE.Uniform(new THREE.Color(0xfafafa))
      }
    })
    ...
  }
}
const fragmentShader = `
uniform vec3 uColor;
  void main() {
      vec3 color = vec3(uColor);
      gl_FragColor = vec4(color,1.);
  }
`;

const vertexShader = `
attribute vec3 aOffset;
  void main() {
    vec3 transformed = position.xyz;

    // Keep them separated to make the next step easier!
    transformed.z = transformed.z + aOffset.z;
    transformed.xy += aOffset.xy;

    vec4 mvPosition = modelViewMatrix * vec4(transformed, 1.);
    gl_Position = projectionMatrix * mvPosition;
  }
`;


Demo: https://codesandbox.io/s/infinite-lights-02-road-and-lights-coznb

Depending on where you look at the tubes from, you'll notice that they might look odd. By default, Three.js' materials don't render the back side of faces (side: THREE.FrontSide).

While we could fix it by changing it to side: THREE.DoubleSide to render all sides, our tubes are going to be small and fast enough that you won't be able to notice the back faces aren't rendered. We can keep it like that for the sake of performance.
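If you ever do need the back faces, it's a one-line change on the material (sketched here against our ShaderMaterial):

const material = new THREE.ShaderMaterial({
  fragmentShader,
  vertexShader,
  side: THREE.DoubleSide, // render back faces too; the default is THREE.FrontSide
  uniforms: { ... }
});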

Giving tubes a different length and radius

Creating our tube with a length and radius of 1 was crucial for this section to work. Now we can set the radius and length of each instance simply by multiplying in the vertex shader: 1 * desiredRadius = desiredRadius.

Let's use the same loop to create a new InstancedBufferAttribute called aMetrics. We'll store the length and radius of each instance here.

Remember that we push to the array twice, once for each light in the pair.

class CarLights {
  ...
  init(){
    ...
    let aMetrics = [];
    for (let i = 0; i < options.nPairs; i++) {
      // We give it a minimum value to make sure the lights aren't too thin or short.
      // Give it some randomness but keep it over 0.1
      let radius = Math.random() * 0.1 + 0.1;
      // Give it some randomness but keep it over length * 0.02
      let length =
        Math.random() * options.length * 0.08 + options.length * 0.02;

      aMetrics.push(radius);
      aMetrics.push(length);

      aMetrics.push(radius);
      aMetrics.push(length);
    }
    instanced.addAttribute(
      "aMetrics",
      new THREE.InstancedBufferAttribute(new Float32Array(aMetrics), 2, false)
    );
    ...
  }
}

In the updated vertex shader below, note that we multiply the position by aMetrics before adding any aOffset. This expands the tubes from their center first, and then moves them into position.

...
const vertexShader = `
attribute vec3 aOffset;
attribute vec2 aMetrics;
  void main() {
    vec3 transformed = position.xyz;

    float radius = aMetrics.r;
    float len = aMetrics.g;

    // 1. Set the radius and length
    transformed.xy *= radius;
    transformed.z *= len;

    // 2. Then move the tubes
    transformed.z = transformed.z + aOffset.z;
    transformed.xy += aOffset.xy;

    vec4 mvPosition = modelViewMatrix * vec4(transformed, 1.);
    gl_Position = projectionMatrix * mvPosition;
  }
`;

Positioning the lights

We want to have two roads of lights coming from different directions. Let's create the second CarLights and move each to its respective position. To center them both, we'll move each by half the road's width plus half the middle island's width.

We'll also give each light its color, and modify the material to use that instead:

class App {
    constructor(){
        this.leftLights  = new CarLights(this, options, 0xff102a);
        this.rightLights = new CarLights(this, options, 0xfafafa);
    }
	init(){
		...
		
        this.leftLights.init();
        this.leftLights.mesh.position.setX(
           -options.roadWidth / 2 - options.islandWidth / 2
        );
        this.rightLights.init();
        this.rightLights.mesh.position.setX(
           options.roadWidth / 2 + options.islandWidth / 2
        );

	}
}
class CarLights {
  constructor(webgl, options, color){
    this.color = color;
    ...
  }
  init(){
    ...
    const material = new THREE.ShaderMaterial({
      fragmentShader,
      vertexShader,
      uniforms: {
        uColor: new THREE.Uniform(new THREE.Color(this.color))
      }
    })
    ...
  }
}

Looking great! We can already start seeing how the project is coming together!

Moving and looping the lights

Because we created the tube's curve on the z-axis, moving the lights is only a matter of adding and subtracting from the z-axis. We'll use the elapsed time uTime because time is always moving and it's pretty consistent.

Let's begin with adding a uTime uniform and an update method. Then our App class can update the time on both our CarLights. And finally, we'll add time to the z-axis on the vertex shader:

class CarLights {
  init(){
    ...
    const material = new THREE.ShaderMaterial({
      fragmentShader,
      vertexShader,
      uniforms: {
        uColor: new THREE.Uniform(new THREE.Color(this.color)),
        uTime: new THREE.Uniform(0),
      }
    })
    ...
  }
  update(t){
    this.mesh.material.uniforms.uTime.value = t;
  }
}
const vertexShader = `
attribute vec3 aOffset;
attribute vec2 aMetrics;
uniform float uTime;
  void main() {
    vec3 transformed = position.xyz;

    float radius = aMetrics.r;
    float len = aMetrics.g;
    transformed.xy *= radius;
    transformed.z *= len;

    // 1. Add time to its offset to make it move
    float zOffset = uTime + aOffset.z;

    // 2. Then place it in the correct position
    transformed.z += zOffset;

    transformed.xy += aOffset.xy;

    vec4 mvPosition = modelViewMatrix * vec4(transformed, 1.);
    gl_Position = projectionMatrix * mvPosition;
  }
`;
class App {
  ...
  update(delta) {
    let time = this.clock.elapsedTime;
    this.leftLights.update(time);
    this.rightLights.update(time);
  }
}

It moves ultra-slow, but it moves!

Let's create a new uniform uSpeed and multiply it with uTime to make the animation go faster. Because each road has to go to a different side we'll also add it to the CarLights constructor to make it customizable.

class CarLights {
  constructor(webgl, options, color, speed) {
    ...
    this.speed = speed;
  }
  init(){
    ...
    const material = new THREE.ShaderMaterial({
      fragmentShader,
      vertexShader,
      uniforms: {
        uColor: new THREE.Uniform(new THREE.Color(this.color)),
        uTime: new THREE.Uniform(0),
        uSpeed: new THREE.Uniform(this.speed)
      }
    })
    ...
  }
  ...
}
const vertexShader = `
attribute vec3 aOffset;
attribute vec2 aMetrics;
uniform float uTime;
uniform float uSpeed;
  void main() {
    vec3 transformed = position.xyz;

    // 1. Set the radius and length
    float radius = aMetrics.r;
    float len = aMetrics.g;
    transformed.xy *= radius;
    transformed.z *= len;

    // 2. Add time, multiplied by speed, to its offset to make it move
    float zOffset = uTime * uSpeed + aOffset.z;

    // 3. Then place it in the correct position
    transformed.z += zOffset;

    transformed.xy += aOffset.xy;

    vec4 mvPosition = modelViewMatrix * vec4(transformed, 1.);
    gl_Position = projectionMatrix * mvPosition;
  }
`;

Now that it's fast, let's make it loop.

We'll use the modulo operator mod to find the remainder of z-offset zOffset divided by the total road length uTravelLength. Getting only the remainder makes zOffset loop whenever it goes over uTravelLength.

Then, we'll subtract that from the z-axis and also add the length len to make it loop outside of the camera's view. And that's looping tubes!

Let's go ahead and add the uTravelLength uniform to our material:

class CarLights {
  init(){
    ...
    const material = new THREE.ShaderMaterial({
      fragmentShader,
      vertexShader,
      uniforms: {
        uColor: new THREE.Uniform(new THREE.Color(this.color)),
        uTime: new THREE.Uniform(0),
        uSpeed: new THREE.Uniform(this.speed),
        uTravelLength: new THREE.Uniform(options.length)
      }
    })
    ...
  }
}

And let's modify the vertex shader's zOffset to make it loop:

const vertexShader = `
attribute vec3 aOffset;
attribute vec2 aMetrics;
uniform float uTime;
uniform float uSpeed;
uniform float uTravelLength;
  void main() {
    vec3 transformed = position.xyz;

    float radius = aMetrics.r;
    float len = aMetrics.g;
    transformed.xy *= radius;
    transformed.z *= len;

    float zOffset = uTime * uSpeed + aOffset.z;
    // 1. Mod by uTravelLength to make it loop whenever it goes over
    // 2. Add len to make it loop a little bit later
    zOffset = len - mod(zOffset, uTravelLength);

    // Keep them separated to make the next step easier!
    transformed.z = transformed.z + zOffset;
    transformed.xy += aOffset.xy;

    vec4 mvPosition = modelViewMatrix * vec4(transformed, 1.);
    gl_Position = projectionMatrix * mvPosition;
  }
`;

If you have a hawk's eye for faulty code, you'll notice the loop isn't perfect. Behind the camera, the tubes go beyond the road's limits (push the camera back to see it in action). But for our use case it does the job; imperfect details outside of the camera's view don't matter.

Going faster and beyond

When holding left click we want our scene to go Speed Racer mode. Faster, and with a wider camera view.

Because the tube's speed is based on time, we'll add an extra offset to time whenever the left click is down. To make this transition extra smooth, we'll use linear interpolation (lerp) for the speedUp variable.

Note: We keep the timeOffset separate from the actual clock's time. Mutating the clock's time is never a good idea.

function lerp(current, target, speed = 0.1, limit = 0.001) {
  let change = (target - current) * speed;
  if (Math.abs(change) < limit) {
    change = target - current;
  }
  return change;
}

class App {
	constructor(){
		...
		this.speedUpTarget = 0.;
		this.speedUp = 0;
		this.timeOffset = 0;
		this.onMouseDown = this.onMouseDown.bind(this);
		this.onMouseUp = this.onMouseUp.bind(this);
	}
	init(){
		...
        this.container.addEventListener("mousedown", this.onMouseDown);
        this.container.addEventListener("mouseup", this.onMouseUp);
        this.container.addEventListener("mouseout", this.onMouseUp);
	}
  onMouseDown(ev) {
    this.speedUpTarget = 0.1;
  }
  onMouseUp(ev) {
    this.speedUpTarget = 0;
  }
  update(delta){

      // Frame-dependent
    this.speedUp += lerp(
      this.speedUp,
      this.speedUpTarget,
        // 10% each frame
      0.1,
      0.00001
    );
      // Also frame-dependent
    this.timeOffset += this.speedUp;

    let time = this.clock.elapsedTime + this.timeOffset;
    ...
  }
}

This is a totally functional and valid animation for our super speed mode; after all, it works. But it'll work differently depending on your Frames Per Second (FPS).

Frame rate independent speed up

The issue with the code above is that every frame we are adding a flat amount to the speed. This animation's speed depends on the frame rate.

It means if your frame rate suddenly becomes lower, or your frame rate was low to begin with, the animation is going to become slower as well. And if your frame rate is higher, the animation is going to speed up.

The result is animations that run faster or slower depending on how many frames per second your computer can achieve: a frame rate dependent animation that takes 2 seconds at 30fps takes only 1 second at 60fps.

Our goal is to animate things using real time: on every computer, the animations should always take the same X amount of seconds.

Looking back at our code, we have two animations that are frame rate dependent:

  • the speedUp's linear interpolation by 0.1 each frame
  • adding speedUp to timeOffset each frame

Adding speedUp to timeOffset is a linear process; it only depends on the speedup variable. So, we can make it frame rate independent by multiplying it by how many seconds have passed since the last frame (delta).

This one-line change makes the addition happen at a rate of this.speedUp per second rather than per frame. You might need to bump up the speed, since the addition is now spread over a whole second.

class App {
  update(delta){
    ...
    this.timeOffset += this.speedUp * delta;
    ...
  }
}

Making the speedUp linear interpolation frame rate independent requires a little bit more math.

In the previous case, adding this.speedUp was a linear process, only dependent on the speedUp value. To make it frame rate independent we used another linear process: multiplying it by delta.

In the case of linear interpolation (lerp), we are trying to move towards the target 10% of the difference each time. This is not a linear process but an exponential process. To make it frame rate independent, we need another exponential process that involves delta.

We'll use the functions found in this article about making lerp frame rate independent.

Instead of moving towards the target by 10% each frame, we'll move towards it based on an exponential function of the time delta.

let coefficient = 0.1;
let lerpT = 1 - Math.pow(2, -coefficient * delta);
this.speedUp += lerp(
  this.speedUp,
  this.speedUpTarget,
  lerpT,
  0.00001
);

This modification completely changes how our coefficient works. Now, a coefficient of 1.0 moves halfway to the target each second.

If we want to use our old coefficient of 0.1, which we know already works fine at 60fps, we can convert it into the new form like this:

let coefficient = -60*Math.log2(1 - 0.1);

Plot twist: math is actually hard. Although there are some great links out there explaining how all of it makes sense, some of it still flies over my head. If you know more about the theory of why all of this works, feel free to reach out or share it in the comments. I would love to have a chat!

Repeat the process for the camera's field of view (camera.fov) and we also get a frame rate independent animation for the FOV. We'll reuse the same lerpT to make it easier.

class App {
	constructor(){
		...
        this.fovTarget = 90;
        ...
	}
  onMouseDown(ev) {
    this.fovTarget = 140;
    ...
  }
  onMouseUp(ev) {
    this.fovTarget = 90;
     ...
  }
  update(delta){
      ...
    let fovChange = lerp(this.camera.fov, this.fovTarget, lerpT );
    if (fovChange !== 0) {
      this.camera.fov += fovChange * delta * 6.;
      this.camera.updateProjectionMatrix();
    }
    ...
    
  }
}

Note: Don't forget to call updateProjectionMatrix() after you are done with the changes, or they won't make it to the GPU.

Modularized distortion

The distortion of each object happens on the vertex shader. And as you can see, all objects share the same distortion. But GLSL doesn't have a module system unless you add something like glslify. If you want to reuse and swap pieces of GLSL code, you have to create that system yourself with JavaScript.

Alternatively, if you have only one or two shaders that need distortion, you can always hard code the distortion GLSL code on each mesh's shader. Then, update each one every time you make a change to the distortion. But try to keep track of updating more than two shaders and you start going insane quickly.

In my case, I chose to keep my sanity and create my own little system. This way I could create multiple distortions and play around with the values for the different demos.

Each distortion is an object with three main properties:

  1. distortion_uniforms: The uniforms this distortion is going to need. Each mesh takes care of adding these into their material.
  2. distortion_chunk: The GLSL code that exposes a getDistortion function for the shaders that implement it. getDistortion receives a normalized progress value indicating how far along the road the point is, and returns the distortion for that specific position.
  3. (Optional) getJS: The GLSL code ported to JavaScript. This is useful for creating JS interactions that follow the curve, like the camera rotating to face the road as we move along.

const distortion_uniforms = {
  uDistortionX: new THREE.Uniform(new THREE.Vector2(80, 3)),
  uDistortionY: new THREE.Uniform(new THREE.Vector2(-40, 2.5))
};

const distortion_vertex = `
#define PI 3.14159265358979
  uniform vec2 uDistortionX;
  uniform vec2 uDistortionY;

  float nsin(float val){
    return sin(val) * 0.5 + 0.5;
  }
  vec3 getDistortion(float progress){
    progress = clamp(progress, 0., 1.);
    float xAmp = uDistortionX.r;
    float xFreq = uDistortionX.g;
    float yAmp = uDistortionY.r;
    float yFreq = uDistortionY.g;
    return vec3(
      xAmp * nsin(progress * PI * xFreq - PI / 2.),
      yAmp * nsin(progress * PI * yFreq - PI / 2.),
      0.
    );
  }
`;

const myCustomDistortion = {
    uniforms: distortion_uniforms,
    getDistortion: distortion_vertex,
}

Then, you pass the distortion object as a property in the options given when instantiating the main App class like so:

const myApp = new App(
	container, 
	{
        ... // Bunch of other options
		distortion: myCustomDistortion,
        ...
    }
)
...

From here each object can take the distortion from the options and use it as it needs.

Both the CarLights and Road classes are going to add distortion.uniforms to their material and modify their shader using Three.js' onBeforeCompile:

const material = new THREE.ShaderMaterial({
  ...
  uniforms: Object.assign(
    { ... }, // The original uniforms of this object
    options.distortion.uniforms
  )
})

material.onBeforeCompile = shader => {
  shader.vertexShader = shader.vertexShader.replace(
    "#include <getDistortion_vertex>",
    options.distortion.getDistortion
  );
};

Before Three.js sends our shaders to WebGL, it checks its custom GLSL to inject any ShaderChunks your shader needs. onBeforeCompile is a function that runs before Three.js compiles your shader into valid GLSL code, making it easy to extend any built-in material.

In our case, we'll use onBeforeCompile to inject our distortion's code, simply to avoid the hassle of injecting it another way.

As it stands now, we aren't injecting any code. We first need to add #include <getDistortion_vertex> to our shaders.

In our CarLights vertex shader we need to map its z-position as its distortion progress. And we'll add the distortion after all other math, right at the end:

// Car Lights Vertex shader
const vertexShader = `
attribute vec3 aOffset;
attribute vec2 aMetrics;
uniform float uTime;
uniform float uSpeed;
uniform float uTravelLength;
#include <getDistortion_vertex>
  void main() {
    ...

    // Map z-position to progress: a range of 0 to 1.
    float progress = abs(transformed.z / uTravelLength);
    transformed.xyz += getDistortion(progress);

    vec4 mvPosition = modelViewMatrix * vec4(transformed, 1.);
    gl_Position = projectionMatrix * mvPosition;
  }
`;

In our Road class, although we see the plane flat and stretching toward negative z, that's only because we rotated the mesh, and the mesh rotation happens after the vertex shader runs. In the eyes of our shader, the plane is still vertical (along the y-axis) and placed at the center of the scene.

To get the correct distortion, we need to map the y-axis to progress. First, we'll un-center it (transformed.y + uTravelLength / 2.), and then we'll normalize it.

Also, we'll add the y-distortion to the z-axis rather than the y-axis. Remember, in the vertex shader, the rotation hasn't happened yet.

// Road Vertex shader
const vertexShader = `
uniform float uTravelLength;
#include <getDistortion_vertex>
  void main(){
    vec3 transformed = position.xyz;

    // Normalize progress to a range of 0 to 1
    float progress = (transformed.y + uTravelLength / 2.) / uTravelLength;
    vec3 distortion = getDistortion(progress);
    transformed.x += distortion.x;
    // The z-axis becomes the y-axis after the mesh rotation
    transformed.z += distortion.y;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(transformed.xyz, 1.);
  }
`;

And there you have the final result for this tutorial!

Finishing touches

There are a few ways you can expand and better sell the effect of an infinite road in the middle of the night, like creating more interesting curves, or fading the objects into the background with some fog effect to make the lights seem like they're glowing.
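As a rough sketch of the fog idea (the varying name and distance range here are illustrative, not taken from the final demo), you could pass the view-space depth out of the vertex shader and fade the color toward the background in the fragment shader:

// Vertex shader: add a varying and fill it after computing mvPosition
varying float vDepth;
// ... inside main():
// vDepth = -mvPosition.z;

// Fragment shader: mix the light color toward black with distance
varying float vDepth;
uniform vec3 uColor;
void main() {
  float fogFactor = smoothstep(20., 150., vDepth); // illustrative range
  gl_FragColor = vec4(mix(uColor, vec3(0.), fogFactor), 1.);
}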

Final Thoughts

I find that re-creating things from outside of the web and simply doing some creative coding opens me up to a wider range of interesting ideas.

In this tutorial, we learned how to instantiate geometries, create frame rate independent animations, and modularize distortions. And we brought it all together to re-create and put some motion into this awesome poster!

Hopefully, you've also liked working through this tutorial! Let me know what you think in the comments and feel free to reach out to me!

High-speed Light Trails in Three.js was written by Daniel Velasquez and published on Codrops.


The New Features of GSAP 3


In this article we will explore many of the new features available from GSAP 3. The GreenSock animation library is a JavaScript library many front-end developers turn to because it can be easy to get started and you can create powerful animations with a lot of control. Now with GSAP 3 getting started with GreenSock is even easier.

Some of the new features we will cover in this article are:

  • GreenSock’s smaller file size
  • A Simplified API which offers a newer syntax
  • Defaults in timelines
  • Easier to use with build tools and bundlers
  • Advanced stagger everywhere!
  • Keyframes
  • MotionPath and MotionPath plugin
  • use of Relative “>” and “<” position prefix in place of labels in Timelines
  • The new “effects” extensibility
  • Utility methods

…and more!

GreenSock’s smaller file size

First and foremost the GreenSock library is now even smaller. It still packs all the amazing features I love, plus more (50+ more to be exact). But it is now about half the size! We will see some of the reasons below like the new simplified API but at its core GSAP was completely rebuilt as modern ES modules.

A Simplified API

With the new version of GreenSock we no longer have to decide whether we want to use TweenMax, TweenLite, TimelineMax, or TimelineLite. Now, everything is in a single simplified API so instead of code that looks like this:

TweenMax.to('.box', 1, {
  scale: 0.5,
  y: 20,
  ease: Elastic.easeOut.config( 1, 0.3)
})

We can write this instead:

gsap.to(".box1",{
  duration: 1,
  scale: 0.5,
  y: 20     // or you can now write translateY: 20,
  ease: "elastic(1, 0.3)",
});

Creating Timelines is easier too. Instead of using new TimelineMax() or new TimelineLite() to create a timeline, you now just use gsap.timeline() (simpler for chaining).
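For instance (the repeat option is just illustrative):

// Old:
const oldTl = new TimelineMax({ repeat: -1 });

// New (and chainable):
const newTl = gsap.timeline({ repeat: -1 });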

Here is an example of the first syntax change. Note that the old syntax still works in GSAP 3 for backward compatibility. According to GreenSock, most legacy code still works great.

See the Pen GreenSock New vs Old syntax by Christina Gorton (@cgorton) on CodePen.

Duration

Previously, the animation’s duration was defined as its own parameter directly after the target element. Like this:

TweenMax.to('.box', 1, {})

With the new version, duration is defined in the same vars object as the rest of the properties you animate and therefore is more explicit.

gsap.to(".box",{
  duration: 2,
});

This adds several benefits such as improved readability. After working with and teaching GSAP for a while now, I agree that having an explicit duration property is helpful for anyone new to GreenSock and those of us who are more experienced. This isn’t the only thing the new API improves though. The other benefits will become more obvious when we look at defaults in timelines and the new Keyframes.

Defaults in timelines

This new feature of GSAP is really wonderful for anyone who creates longer animations with gsap.timeline(). In the past when I would create long animations I would have to add the same properties like ease, duration, and more to each element I was animating in a timeline. Now with defaults I can define default properties that will be used for all elements that are animated unless I specify otherwise. This can greatly decrease the amount of code you are writing for each timeline animation.

Let’s take a look at an example:

This Pen shows a couple of the new features in GSAP 3 but for now we will focus on the defaults property.

See the Pen Quidditch motionPath by Christina Gorton (@cgorton) on CodePen.

I use defaults in a few places in this pen but one timeline in particular shows off its power. At the beginning of this timeline I set defaults for the duration, ease, yoyo, repeat, and the autoAlpha property. Now instead of writing the same properties for each tween I can write it one time.

const moving = () => {
  let tl = gsap.timeline({ 
    defaults: { 
      duration: .02,  
      ease: "back(1.4)", 
      yoyo: true, 
      repeat: 1, 
      autoAlpha: 1 
    }
  })
  tl.to('.wing1',{})
    .to('.wing2',{})
    .to('.wing3',{})
  
  return tl;
}

Without the defaults my code for this timeline would look like this:

const moving = () => {
  let tl = gsap.timeline()
  
  tl.to('.wing1',{
    duration: .02,  
    ease: "back(1.4)", 
    yoyo: true, 
    repeat: 1, 
    autoAlpha: 1
  })
  .to('.wing2',{
    duration: .02,  
    ease: "back(1.4)", 
    yoyo: true, 
    repeat: 1, 
    autoAlpha: 1
  })
  .to('.wing3',{
    duration: .02,  
    ease: "back(1.4)", 
    yoyo: true, 
    repeat: 1, 
    autoAlpha: 1
  })

  return tl;
}

That is around a 10 line difference in code!

Use of Relative > and < position prefix in place of labels in Timelines

This is another cool feature to help with your timeline animations. Typically when creating a timeline I create labels that I then use to add delays or set the position of my Tweens.

As an example I would use tl.add() to add a label then add it to my tween along with the amount of delay I want to use relative to that label.

The way I previously used labels would look something like this:

gsap.timeline()
  .add("s")
  .to(".box1", { ... }, "s")
  .to(".box2", { ... }, "s")
  .to(".box3", { ... }, "s+=0.8")
  .to(".box4", { ... }, "s+=0.8");

See an example here.

With > and < you no longer need to add a label.

From the GreenSock docs:

"Think of them like pointers - "<" points to the start, ">" points to the end (of the most recently-added animation)."

  • "<" references the most recently-added animation's START time
  • ">" references the most recently-added animation's END time

So now a timeline could look more like this:

gsap.timeline()
  .to(".box1", { ... })
  .to(".box2", { ... }, "<")
  .to(".box3", { ... }, "<0.8")
  .to(".box4", { ... }, "<");

And you can offset things with numbers like I do in this example:

See the Pen MotionPath GreenSock v3 by Christina Gorton (@cgorton) on CodePen.

Stagger all the things

Previously in GSAP to stagger animations you had to define it at the beginning of a tween with either a staggerTo(), staggerFrom(), or staggerFromTo() method. In GSAP 3 this is no longer the case. You can simply define your stagger in the vars object like this:

tl.to(".needle",{
  scale: 1,
  delay:0.5,
  stagger: 0.5 //simple stagger of 0.5 seconds
},"start+=1")

...or for a more advanced stagger you can add extra properties like this:

tl.to(".needle",{
  scale: 1,
  delay:0.5,
  stagger: {
    amount: 0.5, //  the total amount of time (in seconds) that gets split up among all the staggers. 
    from: "center" // the position in the array from which the stagger will emanate
  }
},"start+=1")

This animation uses staggers in several places, like the needles. Check out all the staggers in this pen:

See the Pen Cute Cactus stagger by Christina Gorton (@cgorton) on CodePen.

Easier to use with build tools and bundlers

When I have worked on Vue or React projects in the past working with GreenSock could be a little bit tricky depending on the features I wanted to use.

For example, in this Codesandbox I had to import TweenMax, TimelineMax and any ease that I wanted to use.

import { TweenMax, TimelineMax, Elastic, Back} from "gsap";

Now with GSAP 3 my import looks like this:

import gsap from "gsap";

You no longer have to add named imports for each feature since they are now in one simplified API. You may still need to import extra plugins for special animation features like morphing, scrollTo, motion paths, etc.
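For example, pulling in and registering a plugin like MotionPathPlugin looks like this:

import gsap from "gsap";
import { MotionPathPlugin } from "gsap/MotionPathPlugin";

gsap.registerPlugin(MotionPathPlugin);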

Keyframes

If you have ever worked with CSS animations then keyframes will be familiar to you.

So what are keyframes for in GreenSock?

In the past if you wanted to animate the same set of targets to different states sequentially (like "move over, then up, then spin"), you would need to create a new tween for each part of the sequence. The new keyframes feature lets us do that in one Tween!

With this property you can pass an array of keyframes in the same vars object where you typically define properties to animate, and the animations will be nicely sequenced. You can also add delays that will either add gaps (positive delay) or overlaps (negative delay).
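A minimal sketch of the syntax (the selector and values here are made up):

gsap.to(".box", {
  keyframes: [
    { x: 100, duration: 1 },
    { y: 100, duration: 0.5, delay: 0.25 },      // positive delay adds a gap
    { rotation: 360, duration: 2, delay: -0.25 } // negative delay overlaps
  ]
});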

Check out this example to see the keyframes syntax and the use of delays to overlap and add gaps in the animation.

See the Pen GreenSock Keyframes by Christina Gorton (@cgorton) on CodePen.

MotionPath and MotionPath helper plugin

One of the features I am most excited about is MotionPathPlugin and the MotionPathHelper. In the past I used MorphSVGPlugin.pathDataToBezier to animate objects along a path. Here is an example of that plugin:

See the Pen MorphSVGPlugin.pathDataToBezier with StaggerTo and Timeline by Christina Gorton (@cgorton) on CodePen.

But the MotionPathPlugin makes it even easier to animate objects along a path. You can create a path for your elements in two ways:

  • With an SVG path you create
  • Or with manual points you define in your JavaScript (sketched below)
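
A minimal sketch of the second option (the element and points are illustrative):

gsap.to("#ball", {
  duration: 3,
  motionPath: {
    // travel through manually defined points
    path: [{ x: 100, y: 50 }, { x: 200, y: 0 }, { x: 300, y: 80 }]
  }
});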

The previous Quidditch pen I shared uses MotionPathPlugin in several places. First you need to register it like this:

//register the plugin
gsap.registerPlugin(MotionPathPlugin);

Note: the MotionPathHelper plugin is a premium feature of GreenSock and is available to Club GreenSock members but you can try it out for free on CodePen.

I used an SVG editor to create the paths in the Quidditch animation and then I was able to tweak them directly in the browser with the MotionPathHelper! The code needed to add the MotionPathHelper is this:

MotionPathHelper.create(element)

[Screenshot: the MotionPathHelper controls displayed in the browser]

I then clicked "COPY MOTION PATH" and saved the results in variables that get passed to my animation(s).

Paths created with the MotionPathPlugin helper

const path = "M-493.14983,-113.51116 C-380.07417,-87.16916 -266.9985,-60.82716 -153.92283,-34.48516 -12.11783,-77.91982 129.68717,-121.35449 271.49217,-164.78916 203.45853,-70.96417 186.21594,-72.24109 90.84294,-69.64709   ",
      path2 ="M86.19294,-70.86509 C64.53494,-36.48609 45.53694,-13.87709 -8.66106,-8.17509 -23.66506,-40.23009 -30.84506,-44.94009 -30.21406,-88.73909 6.79594,-123.26109 54.23713,-91.33418 89.94877,-68.52617 83.65113,-3.48218 111.21194,-17.94209 114.05694,18.45191 164.08394,33.81091 172.43213,34.87082 217.26913,22.87582 220.68213,-118.72918 95.09713,-364.56718 98.52813,-506.18118  ",
      path3 = "M-82.69499,-40.08529 C-7.94199,18.80104 66.81101,77.68738 141.56401,136.57371 238.08201,95.81004 334.60001,55.04638 431.11801,14.28271 ",
      path4 = "M126.51311,118.06986 C29.76678,41.59186 -66.97956,-34.88614 -163.72589,-111.36414 -250.07922,-59.10714 -336.43256,-6.85014 -422.78589,45.40686 ";

Example of a path passed into an animation

const hover = (rider, path) => {
  let tl = gsap.timeline();
  tl.to(rider, {
    duration: 1,
    ease: "rider",
    motionPath:{
      path: path,
    }
  })
  return tl
}

In this timeline I set up arguments for the rider and the path so I could make it reusable. I add which rider and which path I want the rider to follow in my master timeline.

.add(hover("#cho", path3),'start+=0.1')
.add(hover("#harry", path4),'start+=0.1')

If you want to see the paths and play around with the helper plugin you can uncomment the code at the bottom of the JavaScript file in this pen:

See the Pen Quidditch motionPath by Christina Gorton (@cgorton) on CodePen.

Or, in this pen you can check out the path the wand is animating on:

See the Pen MotionPath GreenSock v3 by Christina Gorton (@cgorton) on CodePen.

Effects

According to the GreenSock docs:

Effects make it easy for anyone to author custom animation code wrapped in a function (which accepts targets and a config object) and then associate it with a specific name so that it can be called anytime with new targets and configurations

So if you create and register an effect, you can reuse it throughout your codebase.

In this example I created a simple effect that makes the target "grow". I create the effect once and can now apply it to any element I want to animate. In this case I apply it to all the elements with the class ".box".
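A sketch of what registering such an effect can look like (the name and values here are illustrative, not the exact code from the pen):

gsap.registerEffect({
  name: "grow",
  effect: (targets, config) => {
    // the tween this effect performs
    return gsap.to(targets, { scale: config.scale, duration: config.duration });
  },
  defaults: { scale: 1.5, duration: 1 } // used unless the caller overrides them
});

// Apply it to any element:
gsap.effects.grow(".box");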

See the Pen GreenSock Effects by Christina Gorton (@cgorton) on CodePen.

Utility methods

Lastly, I'll cover the utility methods, which I have yet to explore extensively, but they are touted as a way to help save you time and accomplish various tasks that are common with animation.

For example, you can feed any two similarly-typed values (numbers, colors, complex strings, arrays, even multi-property objects) into the gsap.utils.interpolate() method along with a progress value between 0 and 1 (where 0.5 is halfway) and it'll return the interpolated value accordingly. Or select a random() value within an array or within a specific range, optionally snapping to whatever increment you want.
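For instance (the values are illustrative):

// Interpolate between two similarly-typed values with a progress of 0 to 1
gsap.utils.interpolate(0, 500, 0.5);        // 250
gsap.utils.interpolate("red", "blue", 0.5); // a color halfway between

// A random value between -100 and 100, snapped to the nearest increment of 5
gsap.utils.random(-100, 100, 5);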

There are 15 utility methods that can be used separately, combined, or plugged directly into animations. Check out the docs for details.

Below I set up one simple example using the distribute() utility which:

Distributes an amount across the elements in an array according to various configuration options. Internally, it’s what advanced staggers use, but you can apply it for any value. It essentially assigns values based on the element’s position in the array (or in a grid)
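A quick sketch of how that can look (the selector and numbers are illustrative):

gsap.to(".box", {
  duration: 1,
  // each target's y is assigned based on its position in the array,
  // radiating out from the center
  y: gsap.utils.distribute({ base: 0, amount: 100, from: "center" })
});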

See the Pen GreenSock Utility Methods by Christina Gorton (@cgorton) on CodePen.

For an even more impressive example check out Craig Roblewsky's pen that uses the distribute() and wrap() utility methods along with several other GSAP 3 features like MotionPathPlugin:

See the Pen MotionPath Distribute GSAP 3.0 by Craig Roblewsky (@PointC) on CodePen.

That wraps up the features we wanted to cover in this article. For the full list of changes and features check out this page and the GreenSock docs. If you'd like to know what old v2 code isn't compatible with v3, see GreenSock's list. But there's not much as GSAP 3 is surprisingly backward-compatible given all the improvements and changes.

References

All of the Pens from this article can be found in this collection.

For more examples check out GreenSock's Showcase and Featured GSAP 3 Pens collection.

The New Features of GSAP 3 was written by Christina Gorton and published on Codrops.

Collective #565


The 2019 Web Almanac

The Web Almanac is an annual state of the web report combining the expertise of the web community with the data and trends of the HTTP Archive.

Check it out


Gauges

Amelia Wattenberger coded up a gauge example from Fullstack D3’s Dashboard Design chapter as a React component.

Check it out


My Inner Wolf

An eclectic visual composition of our inner worlds: a project on absence epilepsy seizures by Moniker in collaboration with Maartje Nevejan.

Check it out

Collective #565 was written by Pedro Botelho and published on Codrops.

Collective #566


Gifolio

Gifolio is a brilliant collection of design portfolios presented using animated GIFs. By Roll Studio.

Check it out




Masks

An interactive presentation on masking techniques originally created for a Creative Front-end Belgium meetup hosted by Reed. By Thomas Di Martino.

Check it out





Supermaya

Supermaya is an Eleventy starter kit designed to help you add rich features to a blog or website without the need for a complicated build process.

Check it out



Dark Mode

Varun Vachhar shares the challenges he encountered when migrating from Jekyll to Gatsby related to dark mode.

Read it



Fresh Folk

A beautiful mix-and-match illustration library of people and objects made by Leni Kauffman.

Check it out



GitHub Archive Program

The GitHub Archive Program will safely store every public GitHub repo for 1,000 years in the Arctic World Archive in Svalbard, Norway.

Check it out



Collective #566 was written by Pedro Botelho and published on Codrops.

Inspirational Websites Roundup #10


Here’s a little collection to get you out of your creative slump. These lush new websites shine with their clean typography and their perfect balance of colors and shapes. Some are truly unique in the way they get their message across, be it with layout effects or fluid surprises. We hope to get you inspired and updated on the current trends. Enjoy!

Folio of Alex van Zijl

Francesco Michelini

Princeton University Press

The story of the white tower

Editorial New

Discovery Land Company

Martin Laxenaire

Spatzek Studio

Digital Design Days

Soletanche Bachy

Bruno Simon

Flavien Guilbaud

Wecargo

Universal Sans

Castor & Pollux

Keio University

CM-Tourisme

D.Potfer Studio

Myles Nguyen

Beauvoir

Born & Bred

Reform Collective

UT

Bruno Arizio

Jakub cech

HAJINSKY Magazine

Andstudio

Akaru

Inspirational Websites Roundup #10 was written by Mary Lou and published on Codrops.

Collective #567




Our Sponsor

Black Friday Is Coming

Not only do you get the best deal ever on Divi memberships and upgrades, but you can also win a Mac Pro worth over $6,000!

Enter now




Pika Registry

Pika is a new kind of package registry and code editor for package authors. Open for early access.

Check it out

Tetris & Snake

Can you play Tetris and Snake at the same time? Try it in this cool experiment by Grégoire Divaret-Chauveau.

Check it out


LegraJS

Legra is a small JavaScript library that lets you draw LEGO like brick shapes on an HTML canvas element.

Check it out




Collective #567 was written by Pedro Botelho and published on Codrops.

Collective #568


Global Design Survey 2019

Explore the data behind designer salaries, career trends, and what’s next for your design discipline in these design survey results by Dribbble.

Check it out



Cockatiel

A resilience and transient-fault-handling library that allows developers to express policies such as Backoff, Retry, Circuit Breaker, Timeout, Bulkhead Isolation, and Fallback. Inspired by .NET Polly.

Check it out

Collective #568 was written by Pedro Botelho and published on Codrops.

Creating a Distorted Mask Effect on an Image with Babylon.js and GLSL


Nowadays, it’s really hard to navigate the web and not run into some wonderful website that has some stunning effects that seem like black magic.

Well, many times that “black magic” is in fact WebGL, sometimes mixed with a bit of GLSL. You can find some really nice examples in this Awwwards roundup, but there are many more out there.

Recently, I stumbled upon the Waka Waka website, one of the latest works of Ben Mingo and Aristide Benoist, and the first thing I noticed was the hover effect on the images.

It was obvious that it’s WebGL, but my question was: “How did Aristide do that?”

Since I love to deconstruct WebGL stuff, I tried to replicate it, and in the end I managed to do it.

In this tutorial I’ll explain how to create an effect really similar to the one in the Waka Waka website using Microsoft’s BabylonJS library and some GLSL.

This is what we’ll do.

The setup

The first thing we have to do is create our scene; it will be very basic and will contain only a plane to which we’ll apply a custom ShaderMaterial.

I won’t cover how to setup a scene in BabylonJS, for that you can check its comprehensive documentation.

Here’s the code that you can copy and paste:

import { Engine } from "@babylonjs/core/Engines/engine";
import { Scene } from "@babylonjs/core/scene";
import { Vector3 } from "@babylonjs/core/Maths/math";
import { ArcRotateCamera } from "@babylonjs/core/Cameras/arcRotateCamera";
import { ShaderMaterial } from "@babylonjs/core/Materials/shaderMaterial";
import { Effect } from "@babylonjs/core/Materials/effect";
import { PlaneBuilder } from "@babylonjs/core/Meshes/Builders/planeBuilder";

class App {
  constructor() {
    this.canvas = null;
    this.engine = null;
    this.scene = null;
  }

  init() {
    this.setup();
    this.addListeners();
  }

  setup() {
    this.canvas = document.querySelector("#app");
    this.engine = new Engine(this.canvas, true, null, true);
    this.scene = new Scene(this.engine);

    // Adding the vertex and fragment shaders to the Babylon's ShaderStore
    Effect.ShadersStore["customVertexShader"] = require("./shader/vertex.glsl");
    Effect.ShadersStore[
      "customFragmentShader"
    ] = require("./shader/fragment.glsl");

    // Creating the shader material using the `custom` shaders we added to the ShaderStore
    const planeMaterial = new ShaderMaterial("PlaneMaterial", this.scene, {
      vertex: "custom",
      fragment: "custom",
      attributes: ["position", "normal", "uv"],
      uniforms: ["worldViewProjection"]
    });
    planeMaterial.backFaceCulling = false;

    // Creating a basic plane and adding the shader material to it
    const plane = PlaneBuilder.CreatePlane(
      "Plane",
      { width: 1, height: 9 / 16 },
      this.scene
    );
    plane.scaling = new Vector3(7, 7, 1);
    plane.material = planeMaterial;

    // Camera
    const camera = new ArcRotateCamera(
      "Camera",
      -Math.PI / 2,
      Math.PI / 2,
      10,
      Vector3.Zero(),
      this.scene
    );

    this.engine.runRenderLoop(() => this.scene.render());
  }

  addListeners() {
    window.addEventListener("resize", () => this.engine.resize());
  }
}

const app = new App();
app.init();

As you can see, it’s not that different from other WebGL libraries like Three.js: it sets up a scene, a camera, and it starts the render loop (otherwise you wouldn’t see anything).

The material of the plane is a ShaderMaterial for which we’ll have to create its respective shader files.

// /src/shader/vertex.glsl

precision highp float;

// Attributes
attribute vec3 position;
attribute vec3 normal;
attribute vec2 uv;

// Uniforms
uniform mat4 worldViewProjection;

// Varyings
varying vec2 vUV;

void main(void) {
    gl_Position = worldViewProjection * vec4(position, 1.0);
    vUV = uv;
}
// /src/shader/fragment.glsl

precision highp float;

// Varyings
varying vec2 vUV;

void main() {
  vec3 color = vec3(vUV.x, vUV.y, 0.0);
  gl_FragColor = vec4(color, 1.0);
}

You can forget about the vertex shader since for the purpose of this tutorial we’ll work only on the fragment shader.

Here you can see it live:

Good, we’ve already written 80% of the JavaScript code we need for the purpose of this tutorial.

The logic

GLSL is cool, it allows you to create stunning effects that would be impossible to do with HTML, CSS and JS alone. It’s a completely different world, and if you’ve always done “web” stuff you’ll get confused at the beginning, because when working with GLSL you have to think in a completely different way to achieve any effect.

The logic behind the effect we want to achieve is pretty simple: we have two overlapping images, and the image that overlaps the other one has a mask applied to it.

Simple, but it doesn’t work like SVG masks for instance.
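In shader terms, here's a tiny preview of where we're heading (the variable names are illustrative; we'll build the real pieces below):

// mask is 0.0 where the image is hidden and 1.0 where it's visible
vec3 color = frontImage * mask;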

Adjusting the fragment shader

Before going any further we need to tweak the fragment shader a little bit.

As for now, it looks like this:

// /src/shader/fragment.glsl

precision highp float;

// Varyings
varying vec2 vUV;

void main() {
  vec3 color = vec3(vUV.x, vUV.y, 0.0);
  gl_FragColor = vec4(color, 1.0);
}

Here, we’re telling the shader to assign each pixel a color whose channels are determined by the value of the x coordinate for the Red channel and the y coordinate for the Green channel.

But we need to have the origin at the center of the plane, not the bottom-left corner. In order to do so we have to refactor the declaration of uv this way:

// /src/shader/fragment.glsl

precision highp float;

// Varyings
varying vec2 vUV;

void main() {
  vec2 uv = vUV - 0.5;
  vec3 color = vec3(uv.x, uv.y, 0.0);
  gl_FragColor = vec4(color, 1.0);
}

This simple change will result into the following:

This is because we moved the origin from the bottom left corner to the center of the plane, so uv‘s values go from -0.5 to 0.5. Since you cannot assign negative values to RGB channels, the Red and Green channels fall back to 0.0 on the whole bottom left area.

Creating the mask

First, let’s change the color of the plane to complete black:

// /src/shader/fragment.glsl

precision highp float;

// Varyings
varying vec2 vUV;

void main() {
  vec2 uv = vUV - 0.5;
  vec3 color = vec3(0.0);
  gl_FragColor = vec4(color, 1.0);
}

Now let’s add a rectangle that we will use as the mask for the foreground image.

Add this code outside the main() function:

vec3 Rectangle(in vec2 size, in vec2 st, in vec2 p, in vec3 c) {
  float top = step(1. - (p.y + size.y), 1. - st.y);
  float right = step(1. - (p.x + size.x), 1. - st.x);
  float bottom = step(p.y, st.y);
  float left = step(p.x, st.x);
  return top * right * bottom * left * c;
}

(How to create shapes is beyond the scope of this tutorial. For that, I suggest you read this chapter of “The Book of Shaders”.)

The Rectangle() function does exactly what its name says: it creates a rectangle based on the parameters we pass to it.

Then, we redeclare the color using that Rectangle() function:

vec2 maskSize = vec2(0.3, 0.3);

// Note that we're subtracting HALF of the width and height to position the rectangle at the center of the scene
vec2 maskPosition = vec2(-0.15, -0.15);
vec3 maskColor =  vec3(1.0);

color = Rectangle(maskSize, uv, maskPosition, maskColor);

Awesome! We now have our black plane with a beautiful white rectangle at the center.

But, wait! That’s not supposed to be a rectangle; we set its size to 0.3 on both the width and the height, so it should be a square!

That’s because of the ratio of our plane, but it can be easily fixed in two simple steps.

First, add this snippet to the JS file:

this.scene.registerBeforeRender(() => {
  plane.material.setFloat("uPlaneRatio", plane.scaling.x / plane.scaling.y);
});

And then, edit the shader by adding this line at the top of the file:

uniform float uPlaneRatio;

…and this line too, right below the line that sets the uv variable:

uv.x *= uPlaneRatio;

Short explanation

In the JS file, we’re sending a uPlaneRatio uniform (one of the GLSL data types) to the fragment shader, whose value is the ratio between the plane’s width and height.

We made the fragment shader aware of that uniform by declaring it at the top of the file; the shader then uses it to adjust the uv.x value.


Here you can see the final result: a black plane with a white square at the center; nothing too fancy (yet), but it works!

Adding the foreground image

Displaying an image in GLSL is pretty simple. First, edit the JS code and add the following lines:

// Import the `Texture` module from BabylonJS at the top of the file
import { Texture } from '@babylonjs/core/Materials/Textures/texture'
// Add this After initializing both the plane mesh and its material
const frontTexture = new Texture('src/images/lantern.jpg')
plane.material.setTexture("u_frontTexture", frontTexture)

This way, we’re passing the foreground image to the fragment shader as a Texture element.

Now, add the following lines to the fragment shader:

// Put this at the beginning of the file, outside of the `main()` function
uniform sampler2D u_frontTexture;
// Put this at the bottom of the `main()` function, right above `gl_FragColor = ...`
vec3 frontImage = texture2D(u_frontTexture, uv * 0.5 + 0.5).rgb;

A bit of explaining:

We told BabylonJS to pass the texture to the shader as a sampler2D with the setTexture() method, and then we told the shader to expect a sampler2D named u_frontTexture.

Finally, we created a new variable of type vec3 named frontImage that contains the RGB values of our texture.

By default, a texture2D is a vec4 variable (it contains the r, g, b and a values), but we don’t need the alpha channel so we declare frontImage as a vec3 variable and explicitly get only the .rgb channels.

Please also note that we've modified the UVs of the texture by first multiplying them by 0.5 and then adding 0.5. This is because at the beginning of the main() function I remapped the coordinate system to -0.5 -> 0.5, and because we adjusted the value of uv.x.


If you now add this to the GLSL code…

color = frontImage;

…you will see our image, rendered by a GLSL shader:

Masking

Always keep in mind that, for shaders, everything is a number (yes, even images), and that 0.0 means completely hidden while 1.0 stands for fully visible.

We can now use the mask we’ve just created to hide the parts of our image where the value of the mask equals 0.0.

With that in mind, it’s pretty easy to apply our mask. The only thing we have to do is multiply the color variable by the value of the mask:

// The mask should be a separate variable, not set as the `color` value
vec3 mask = Rectangle(maskSize, uv, maskPosition, maskColor);

// Some super magic trick
color = frontImage * mask;

Et voilà, we now have a fully functioning mask effect:

Let’s enhance it a bit by making the mask follow a circular path.

In order to do that we must go back to our JS file and add a couple of lines of code.

// Add this to the class constructor
this.time = 0
// This goes inside the `registerBeforeRender` callback
this.time++;
plane.material.setFloat("u_time", this.time);

In the fragment shader, first declare the new uniform at the top of the file:

uniform float u_time;

Then, edit the declaration of maskPosition like this:

vec2 maskPosition = vec2(
  cos(u_time * 0.05) * 0.2 - 0.15,
  sin(u_time * 0.05) * 0.2 - 0.15
);

u_time is simply one of the uniforms that we pass to our shader from the WebGL program.

The only difference from the u_frontTexture uniform is that we increase its value on each render loop and pass the new value to the shader, so that it updates the mask's position.
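Side note: incrementing u_time by 1 per frame ties the animation speed to the display's refresh rate. A hedged variant (assuming this.engine holds the Babylon Engine instance, which isn't shown in this article) scales the increment by the elapsed time instead:

// A sketch, not the article's code: getDeltaTime() returns the milliseconds
// since the last frame, so the mask speed stays constant at any frame rate.
this.scene.registerBeforeRender(() => {
  this.time += this.engine.getDeltaTime() * 0.06; // roughly +1 per frame at 60fps
  plane.material.setFloat("u_time", this.time);
});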

Here’s a live preview of the mask going in a circle:

Adding the background image

In order to add the background image we’ll do the exact opposite of what we did for the foreground image.

Let’s go one step at a time.

First, in the JS class, pass the shader the background image in the same way we did for the foreground image:

const backTexture = new Texture("src/images/lantern-bw.jpg");
plane.material.setTexture("u_backTexture", backTexture);

Then, tell the fragment shader that we’re passing it that u_backTexture and initialize another vec3 variable:

// This goes at the top of the file
uniform sampler2D u_backTexture;

// Add this after `vec3 frontImage = ...`
vec3 backImage = texture2D(u_backTexture, uv * 0.5 + 0.5).rgb;

When you do a quick test by replacing

color = frontImage * mask;

with

color = backImage * mask;

you’ll see the background image.

But for this one, we have to invert the mask to make it behave the opposite way.

Inverting a number is really easy, the formula is:

invertedNumber = 1 - <number>

So, let’s apply the inverted mask to the background image:

backImage *= (1.0 - mask);

Here, we’re applying the same mask we added to the foreground image, but since we inverted it, the effect is the opposite.

Putting it all together

At this point, we can refactor the declaration of the two images by directly applying their masks.

vec3 frontImage = texture2D(u_frontTexture, uv * 0.5 + 0.5).rgb * mask;
vec3 backImage = texture2D(u_backTexture, uv * 0.5 + 0.5).rgb * (1.0 - mask);

We can now display both images by adding backImage to frontImage:

color = backImage + frontImage;

That’s it, here’s a live example of the desired effect:

Distorting the mask

Cool uh? But it’s not over yet! Let’s tweak it a bit by distorting the mask.

To do so, we first have to create a new vec2 variable:

vec2 maskUV = vec2(
  uv.x + sin(u_time * 0.03) * sin(uv.y * 5.0) * 0.15,
  uv.y + cos(u_time * 0.03) * cos(uv.x * 10.0) * 0.15
);

Then, replace uv with maskUV in the mask declaration

vec3 mask = Rectangle(maskSize, maskUV, maskPosition, maskColor);

In maskUV, we’re using some math to add uv values based on the u_time uniform and the current uv.

Try tweaking those values by yourself to see different effects.

Distorting the foreground image

Let’s now distort the foreground image the same way we did for the mask, but with slightly different values.

Create a new vec2 variable to store the foreground image uvs:

vec2 frontImageUV = vec2(
  (uv.x + sin(u_time * 0.04) * sin(uv.y * 10.) * 0.03),
  (uv.y + sin(u_time * 0.03) * cos(uv.x * 15.) * 0.05)
);

Then, use that frontImageUV instead of the default uv when declaring frontImage:

vec3 frontImage = texture2D(u_frontTexture, frontImageUV * 0.5 + 0.5).rgb * mask;

Voilà! Now both the mask and the image have a distortion effect applied.

Again, try tweaking those numbers to see how the effect changes.

Adding mouse control

What we’ve made so far is really cool, but we could make it even cooler by adding some mouse control like making it fade in/out when the mouse hovers/leaves the plane and making the mask follow the cursor.

Adding fade effects

In order to detect the mouseover/mouseleave events on a mesh and execute some code when those events occur we have to use BabylonJS’s actions.

Let’s start by importing some new modules:

import { ActionManager } from "@babylonjs/core/Actions/actionManager";
import { ExecuteCodeAction } from "@babylonjs/core/Actions/directActions";
import "@babylonjs/core/Culling/ray";

Then add this code after the creation of the plane:

this.plane.actionManager = new ActionManager(this.scene);

this.plane.actionManager.registerAction(
  new ExecuteCodeAction(ActionManager.OnPointerOverTrigger, () =>
    this.onPlaneHover()
  )
);

this.plane.actionManager.registerAction(
  new ExecuteCodeAction(ActionManager.OnPointerOutTrigger, () =>
    this.onPlaneLeave()
  )
);

Here we’re telling the plane’s ActionManager to listen for the PointerOver and PointerOut events and execute the onPlaneHover() and onPlaneLeave() methods, which we’ll add right now:

onPlaneHover() {
  console.log('hover')
}

onPlaneLeave() {
  console.log('leave')
}

Some notes about the code above

Please note that I’ve used this.plane instead of just plane; that’s because we’ll have to access it from within the mousemove event’s callback later, so I’ve refactored the code a bit.

ActionManager allows us to listen to certain events on a target, in this case the plane.

ExecuteCodeAction is a BabylonJS action that we’ll use to execute some arbitrary code.

ActionManager.OnPointerOverTrigger and ActionManager.OnPointerOutTrigger are the two events that we’re listening to on the plane. They behave exactly like the mouseenter and mouseleave events for DOM elements.

To detect hover events in WebGL, we need to “cast a ray” from the position of the mouse to the mesh we're checking; if that ray, at some point, intersects with the mesh, it means that the mouse is hovering over it. This is why we're importing the @babylonjs/core/Culling/ray module; BabylonJS will take care of the rest.


Now, if you test it by hovering and leaving the mesh, you’ll see that it logs hover and leave.

Now, let’s add the fade effect. For this, I’ll use the GSAP library, which is the de-facto library for complex and high-performant animations.

First, install it:

yarn add gsap

Then, import it in our class

import gsap from 'gsap'

and add this line to the constructor

this.maskVisibility = { value: 0 };

Finally, add this line to the registerBeforeRender()‘s callback function

this.plane.material.setFloat("u_maskVisibility", this.maskVisibility.value);

This way, we’re sending the shader the current value property of this.maskVisibility as a new uniform called u_maskVisibility.

Refactor the fragment shader this way:

// Add this at the top of the file, like any other uniforms
uniform float u_maskVisibility;

// When declaring `maskColor`, replace `1.0` with the `u_maskVisibility` uniform
vec3 maskColor = vec3(u_maskVisibility);

If you now check the result, you’ll see that the foreground image is not visible anymore; what happened?

Do you remember when I wrote that “for shaders, everything is a number”? That’s the reason! The u_maskVisibility uniform equals 0.0, which means that the mask is invisible.

We can fix it in a few lines of code. Open the JS code and refactor the onPlaneHover() and onPlaneLeave() methods this way:

onPlaneHover() {
  gsap.to(this.maskVisibility, {
    duration: 0.5,
    value: 1
  });
}

onPlaneLeave() {
  gsap.to(this.maskVisibility, {
    duration: 0.5,
    value: 0
  });
}

Now, when you hover or leave the plane, you’ll see that the mask fades in and out!

(And yes, BabylonJS has its own animation engine, but I'm way more confident with GSAP; that's why I opted for it.)

Make the mask follow the mouse cursor

First, add this line to the constructor

this.maskPosition = { x: 0, y: 0 };

and this to the addListeners() method:

window.addEventListener("mousemove", () => {
  const pickResult = this.scene.pick(
    this.scene.pointerX,
    this.scene.pointerY
  );

  if (pickResult.hit) {
    const x = pickResult.pickedPoint.x / this.plane.scaling.x;
    const y = pickResult.pickedPoint.y / this.plane.scaling.y;

    this.maskPosition = { x, y };
  }
});

What the code above does is pretty simple: on every mousemove event it casts a ray with this.scene.pick() and updates the values of this.maskPosition if the ray is intersecting something.

(Since we have only a single mesh we can avoid checking what mesh is being hit by the ray.)
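If the scene ever grows beyond a single mesh, a hedged variant of that check can compare the picked mesh explicitly (pickedMesh is part of BabylonJS's picking info):

// A sketch for scenes with several meshes: only update the mask position
// when the ray actually hits our plane.
if (pickResult.hit && pickResult.pickedMesh === this.plane) {
  const x = pickResult.pickedPoint.x / this.plane.scaling.x;
  const y = pickResult.pickedPoint.y / this.plane.scaling.y;

  this.maskPosition = { x, y };
}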

Again, on every render loop, we send the mask position to the shader, but this time as a vec2. First, import the Vector2 module together with Vector3

import { Vector2, Vector3 } from "@babylonjs/core/Maths/math";

Add this in the runRenderLoop callback function

this.plane.material.setVector2(
  "u_maskPosition",
  new Vector2(this.maskPosition.x, this.maskPosition.y)
);

Add the u_maskPosition uniform at the top of the fragment shader

uniform vec2 u_maskPosition;

Finally, refactor the maskPosition variable this way

vec2 maskPosition = vec2(
  u_maskPosition.x * uPlaneRatio - 0.15,
  u_maskPosition.y - 0.15
);

Side note; I’ve adjusted the x using the uPlaneRatio value because at the beginning of the main() function I did the same with the shader’s uvs

And here you can see the result of your hard work:

Conclusion

As you can see, doing these kinds of things doesn't involve too much code (~150 lines of JavaScript and ~50 lines of GLSL, including comments and empty lines); the hard part with WebGL is that it's complex by nature, and it's a very vast subject, so vast that many times I don't even know what to search for on Google when I get stuck.

Also, you have to study a lot, way more than with “standard” website development. But in the end, it’s really fun to work with.

In this tutorial, I tried to explain the whole process (and the reasoning behind everything) step by step, just like I'd want someone to explain it to me; if you've reached this point of the tutorial, it means that I've reached my goal.

In any case, thanks!

Credits

The lantern image is by Vladimir Fetodov

Creating a Distorted Mask Effect on an Image with Babylon.js and GLSL was written by Francesco Michelini and published on Codrops.


Collective #569






How to Overlap Images in CSS

A great article by Bri Camp Gomez where she shows how to overlap images with CSS Grid and provide a fallback for non-supportive browsers.

Read it



Flowy

Alyssa X made this minimal JavaScript library for creating beautiful flowcharts.

Check it out










AppLibsList

A categorized collection of trending and most commonly used libraries and components for ReactJS developers.

Check it out





Peekobot

Peekobot is a simple choice-driven chatbot framework in less than 100 lines of JavaScript. Made by Ross Wintle.

Check it out




Who Can Use

Find out who can use your color combination by checking the WCAG grading and contrast ratio.

Check it out



Collective #569 was written by Pedro Botelho and published on Codrops.

Collective #570


Lighthouse CI

Lighthouse CI is a set of commands that make continuously running, asserting, saving, and retrieving Lighthouse results as easy as possible.

Check it out


Our Sponsor

The Divi Cyber Monday Sale 2019

If you are waiting for the perfect time to join the Divi community or the best time to upgrade your current account to Lifetime, this is it! Don’t miss your chance because a better deal than this doesn’t exist!

Get the deal



Bekk Christmas

Bekk is creating twelve calendars, each with daily content, articles and podcasts around front-end, coding, UX and more.

Check it out


Blocks UI

A JSX-based page builder for creating beautiful websites without writing code. It’s currently in early alpha and only supports a constrained subset of JSX source code.

Check it out




Diagram.Codes

A tool that lets you describe diagrams with a simple text language and automatically generate exportable images.

Check it out



Blooom

An experimental project by Tom Pickering to showcase his exploration of modern web technologies through creative coding.

Check it out



Patchbay

Patchbay.pub is a free web service you can use to implement things like static site hosting, file sharing, cross-platform notifications, and much more.

Check it out











Fibery

A fun way to present a product: Fibery’s “honest” landing page. They actually have a pretty cool tool.

Check it out

Collective #570 was written by Pedro Botelho and published on Codrops.

Motion Paths – Past, Present and Future


Making animations that “feel right” can be tricky.

When I’m stuck, I find Disney’s 12 principles of animation useful. They’re from the book ‘The Illusion of Life’ and although the book was written about hand-drawn character animation, a lot of the principles are relevant for animation on the web.

The 7th principle of animation is about arcs:

Most natural action tends to follow an arched trajectory, and animation should adhere to this principle by following implied “arcs” for greater realism.

In other words, animating along a curved path can make movement feel more realistic.

Straight lines are what browsers do best though. When we animate an element from one place to another using a translation the browser doesn’t take realism into account. It’ll always take the fastest and most efficient route.

This is where motion paths can come in handy. Motion paths give us the ability to move an element along a predefined path. They’re great for creating trajectories to animate along.

Use the toggle to see the paths.

See the Pen Alien Abduction- toggle by Cassie Evans (@cassie-codes) on CodePen.default

As well as being useful, they’re quite a lot of fun to play around with.

See the Pen Loop by Cassie Evans (@cassie-codes) on CodePen.default

So, how do you animate along a motion path?

I use GreenSock (GSAP) for most of my SVG animation and I made these demos using the newly released GSAP 3 and their MotionPathPlugin. So, if you want to skip to that bit, go ahead!

Otherwise let’s take a little journey through the past, present and future of motion path animation.

(Did someone say CSS motion paths?)

First, a little setup tip.

Make sure to keep the path and element you’re animating in the same SVG and co-ordinate space, otherwise things get a bit messy.

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1300 800">
  <path class="path" d="M1345.7 2.6l2.4-4.4"/>
  <g class="rocket">
    ...
  </g>
</svg>

SMIL

If you google “SVG motion path animation”, you’re going to get a lot of hits talking about SMIL.

SMIL was the original proposed method for SVG animation. It included the ability to animate along a path using the <animatemotion> element.

It’s nice and declarative and currently the browser support is surprisingly good, covering all modern browsers except Edge and Opera Mini.

But, and this is a big but, the future of SMIL is uncertain, and has been for a while.

It was deprecated by Chrome a few years back and although they’ve now suspended that deprecation, implementations still vary and there’s no clear path towards cross-browser support.

Although it’s fun to play around with, SMIL isn’t very future-proof, so I’m only going to touch on it.

In order to animate along a path with the animateMotion element, you reference the path you want to animate along using path="..." and define the element you want to animate using xlink:href="#...":

<animateMotion 
    path="M20.2..."
    xlink:href="#rocket" 
    dur="10s" 
    rotate="auto"
    
/>

See the Pen loop SMIL by Cassie Evans (@cassie-codes) on CodePen.default

With SMIL effectively out of the picture, browser vendors are now focused on supporting modern alternatives like the CSS Motion Path Module.

CSS Motion Path Module

Attention: As of the time of writing, the examples in this section are experimental and best viewed in Chrome.

You can check out which features your browser supports in the demo below.

See the Pen Browser Support – CSS motion path module by Cassie Evans (@cassie-codes) on CodePen.default

If you’ve got all green smiley faces, you’re good to go. But you may have a sad face for offset-anchor. This is because this property is currently still experimental. It’s behind a flag in Chrome, meaning it’s not turned on by default.

You can choose to enable it by going to this URL in Chrome:

chrome://flags/#enable-experimental-web-platform-features

and enabling experimental web platform features.

This module is joint work by the SVG and CSS working groups, so unlike SMIL, we'll be able to use CSS motion paths to animate both HTML and SVG DOM elements. I love a CSS-only solution, so although it's not ready to use in production (yet), this is pretty exciting stuff.

The motion path module consists of five properties:

  • offset (shorthand property for the following)
  • offset-path
  • offset-distance
  • offset-anchor
  • offset-rotate

offset-path

offset-path defines the path that we can place our element on. There are a few proposed values but path() seems to be the only one supported right now.

.rocket {
	offset-path: path('M1345.7 2.6l2.4-4.4');
}

path() takes a path string with SVG coordinate syntax, which may look scary, but you don’t have to write this out. You can create a path in a graphics editing program and copy and paste it in.

offset-distance

offset-distance specifies the position along an offset-path for an element to be placed. This can be either in pixels or as a percentage of the length of the path.
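For example, a minimal rule (reusing the path data from the setup snippet earlier) that parks an element halfway along its path could look like this:

.rocket {
  offset-path: path('M1345.7 2.6l2.4-4.4');
  offset-distance: 50%; /* halfway along the path */
}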

See the Pen Rocket – CSS motion path – offset-distance by Cassie Evans (@cassie-codes) on CodePen.default

offset-anchor

By default the element’s top left corner will be aligned with the path, but we can change this with offset-anchor.
offset-anchor behaves a lot like transform-origin. In fact if set to auto, it’s given the same value as the element’s transform-origin, so we can optionally use transform-origin for the same results.

Like transform-origin it accepts a position with x and y values, either as a percentage or a keyword like bottom or left.

Have a play with the values:

See the Pen Rocket – CSS motion path – offset anchor by Cassie Evans (@cassie-codes) on CodePen.default

offset-rotate

offset-rotate defines the direction the element faces on the path.

By default it’s set to auto and will rotate with the path. You can pass in an optional second value in degrees in order to tweak the direction of this rotation.

See the Pen Rocket – CSS motion path – offset-rotate – auto deg by Cassie Evans (@cassie-codes) on CodePen.default

If you want your element to face the same direction throughout, and not rotate with the path, you can leave out auto and pass in a value in degrees.

See the Pen Rocket – CSS motion path – offset-rotate – deg by Cassie Evans (@cassie-codes) on CodePen.default

These properties were renamed from motion to offset since this spec was proposed. This is because alone, these properties just provide another way to set the position and rotation of absolutely positioned elements. But we can create motion by using them in conjunction with CSS animations and transitions.

.rocket {
  offset-path: path('M20.2...');
  offset-anchor: 50% 50%;
  offset-rotate: auto;
  /*   if offset anchor isn't supported we can use transform-origin instead */
  transform-origin: 50% 50%;
  animation: move 8s forwards linear;
  transform-box: fill-box;
}

@keyframes move {
  from {
    offset-distance: 0%;
  }
  to {
    offset-distance: 100%;
  }
}

See the Pen Rocket – CSS motion path by Cassie Evans (@cassie-codes) on CodePen.default

Attention: SVG transform-origin quirks.

In this demo, I’m using a relatively new CSS property, transform-box.

This is to avoid a browser quirk that’s caught me out a few times. When calculating transforms and transform-origin, some browsers use the element’s bounding box as the reference box and others use the SVG viewbox.

If you set the value to fill-box, the object's bounding box is used as the reference box.

And if you set the value to view-box the nearest SVG viewbox is used as the reference box.

You can see what happens to the center of rotation when we change it here:

See the Pen Rocket – CSS motion path – transform-box by Cassie Evans (@cassie-codes) on CodePen.default

GreenSock Animation Platform (GSAP)

While we wait for the CSS solution to be more widely implemented, we're in a bit of a motion path limbo. Thankfully, there are some JavaScript animation libraries that are bridging this gap.

I usually use GreenSock for SVG animation for a few reasons.

There are some cross browser quirks with SVG, especially with how transforms are handled. The folks at GreenSock go above and beyond to handle these inconsistencies.

Animation can also be a bit fiddly, especially when it comes to fine-tuning timings and chaining different animations together. GreenSock gives you a lot of control and makes creating complex animations fun.

They also provide some plugins that are great for SVG animation like DrawSVG, MorphSVG and MotionPathPlugin.

They’re all free to experiment with on Codepen, but some of the plugins are behind a membership fee. MotionPathPlugin is one of the free ones, and part of the new GSAP 3 release.

MotionPathPlugin gives you the ability to turn an SVG path into a motion path, or specify your own path manually. You can then animate SVG or DOM elements along that path, even if those elements are in a completely different coordinate space.

Here’s a demo with the necessary libraries added to start you off.

In order to use a plugin we have to register it, like this:

gsap.registerPlugin(MotionPathPlugin);

Then we can start animating. This is what a tween using the simplified GSAP 3 syntax looks like:

gsap.to(".rocket", {
	motionPath: ...
	duration: 5,
});

The name ‘tween’ comes from the world of hand-drawn animation, too.

Tweening is the process of generating intermediate frames between two images to give the appearance that the first image evolves smoothly into the second image.

That’s pretty much what a GSAP tween does. You feed in the element you want to animate, the duration, and the properties you want to target and the tween will figure out the in-between states.

The motionPath attribute can be used in shorthand form by passing it a path:

gsap.to(".rocket", {
	motionPath: "#path",
	duration: 5,
});

Or, if we want more control over the settings we can pass it an object of options:

gsap.to(".rocket", {
	motionPath: {
		path: "#path",
		align: "#path",
		autoRotate: true,
	},
	duration: 5,
});

See the Pen Rocket – GSAP motion path by Cassie Evans (@cassie-codes) on CodePen.default

Here are some of the properties we can control.

path

This defines the motion path we're animating along. We can reference a path that exists in the document by using a selector,

motionPath: {
	path: "#path",
}

a string that contains SVG path data,

motionPath: {
	path: 'M125.7 655a9.4 9.4...',
}

an object containing an array of x and y co-ordinates to move between,

motionPath: {
	path: [{x: 100, y: 100}, {x: 300, y: 20}]
}

or a variable referring to one of these options:

const myPath = 'M125.7 655a9.4 9.4...'

motionPath: {
	path: myPath,
}

align

We can use this to align the element to the path, or other elements in the document by passing in a selector:

motionPath: {
	path: "#path",
	align: "#path"
}

We can also align the element to itself if we want the animation to start from the element’s current position.

motionPath: {
	path: "#path",
	align: "self"
}

In the next demo, the purple rocket is aligned to self and the green rocket is aligned to the path.

align: “self” is like moving the path to the element, rather than the element to the path.

See the Pen Rocket – GSAP motion path – align by Cassie Evans (@cassie-codes) on CodePen.default

By default, the element’s top left corner will be the center of rotation and alignment. In order to align the element accurately on the path you’ll need to set the element’s center of rotation, like this:

gsap.set(".rocket", { 
	xPercent: -50,    
	yPercent: -50,    
	transformOrigin: "50% 50%"
});

autoRotate

This is how we get our element to rotate along with the curvature of the path:

motionPath: {
	path: "#path",
	align: "#path",
	autoRotate: true,
}

We can also provide a number value. This will rotate along with the path, but maintain that angle relative to the path.

motionPath: {
	path: "#path",
	align: "#path",
	autoRotate: 90,
}

start & end

These properties let us define where on the path the motion should begin and end.

By default, it starts at 0 and ends at 1, but we can provide any decimal number:

motionPath: {
	path: "#path",
	align: "#path",
	autoRotate: true,
	start: 0.25,
	end: 0.75,
}

If you want the element to go backwards along the path, you can provide negative numbers.
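For example (the values here are just for illustration), a negative end value sends the rocket backwards along the path:

motionPath: {
	path: "#path",
	align: "#path",
	autoRotate: true,
	start: 0,
	end: -1,
}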

See the Pen Rocket – GSAP motion path – align by Cassie Evans (@cassie-codes) on CodePen.default

immediateRender

If your element is starting off at a different position in the document and you want it to align with the path you might notice a jump as it moves from its position to the path.

See the Pen Rocket – GSAP motion path – align by Cassie Evans (@cassie-codes) on CodePen.default

You can force it to render immediately upon instantiation by adding immediateRender: true to the tween.

// animate the rocket along the path
gsap.to(".rocket", {
    motionPath: {
        path: "#path",
        align: "#path",
        autoRotate: true,
    },
    duration: 5,
    ease: "power1.inOut",
    immediateRender: true,
});

MotionPathHelper

Another super cool feature of the GSAP 3 release is the MotionPathHelper.

It enables you to edit paths directly in the browser! I found this really helpful, as I’m always going back and forth between the browser and my graphics editor.

Give it a go in the demo below. When you’re done, click “copy motion path” to copy the SVG path data to your clipboard. Paste the new path data into the d=”” attribute in the SVG code to update your path.

There are instructions on how to edit the path in the GSAP docs.

See the Pen Rocket – GSAP motion path – helper by Cassie Evans (@cassie-codes) on CodePen.default

GreenSock is a ton of fun to play around with!

There are a bunch of other features and plugins that when paired with motion path animation can be used to create really cool effects.

In this demo, DrawSVG is progressively showing the text path as some staggered elements animate along the path using MotionPathPlugin:

See the Pen Squiggle text animation by Cassie Evans (@cassie-codes) on CodePen.default

If you’ve had fun with these examples and want to explore GreenSock some more, Christina Gorton has written The New Features of GSAP 3 providing a practical overview.

GreenSock also have a great getting started guide.

Happy animating!

Motion Paths – Past, Present and Future was written by Cassie Evans and published on Codrops.

Awesome Demos Roundup #11


Collective #571






CSS Layout

A fantastic collection of popular layouts and patterns made with CSS. Made by Phuoc Nguyen.

Check it out



Advent of Code 2019

Advent of Code is an Advent calendar of small programming puzzles for a variety of skill sets and skill levels that can be solved in any programming language you like.

Check it out







DrumBot

Play real-time music with a machine learning drummer that drums based on your melody. Read more about it in this article.

Check it out








Firefox 71: A year-end arrival

A plethora of new developer tools features including the web socket message inspector, console multi-line editor mode and more are coming in the new Firefox version.

Check it out






Collective #571 was written by Pedro Botelho and published on Codrops.

Collective #572






Browser Default Styles

A great tool that lets you search against any element for standardized and default styles from all major rendering engines (WebKit, Blink, Gecko, Trident).

Check it out




NanoNeuron

NanoNeuron is a set of seven simple JavaScript functions that will give you a feeling of how machines can actually “learn”.

Check it out




Waves

A beautiful demo of a wave by Louis Hoebregts that changes as the mouse moves.

Check it out




AnonAddy

With AnonAddy you can create unlimited aliases for free and protect your email from spam using disposable addresses.

Check it out





Flynt

Flynt is a powerful, component-based WordPress starter theme for developers.

Check it out


Matestack

In case you haven’t heard about it: Matestack can help you rapidly create interactive UIs in pure Ruby.

Check it out

Collective #572 was written by Pedro Botelho and published on Codrops.

A Showcase of Creative Websites and How to Build Uniquely Special Ones for Your Clients


As the digital market continues to expand, it means more business for web designers and front-end developers. That’s of course good news, but there’s a catch. Today’s clients are becoming more and more sophisticated about what they expect from their online presence. Today’s web professionals are bound and determined to do their best to satisfy those clients and their demands.

You’re no different, but you’ll have to work harder (and/or smarter) to get your share of good assignments. You have to deliver websites that stand out from the rest and use tools that enable you to do just that – a tool like Be Theme for example.

Be Theme is the largest and most versatile WordPress theme on the market. It will handle much, if not most, of the creative effort for you, doing all the heavy lifting involved in putting together an attention-grabbing deliverable.

Let’s have a look at what Be Theme offers, along with some practical advice on how to build strikingly creative websites. With plenty of cool examples it will put you in good shape. This way you can create impressive, engaging, and visitor-converting creative websites.

Overuse “white” space? Not a bad idea.

Seemingly empty space has more going for it than you might think. In fact, it’s not all that easy to “overuse” the design element we refer to as white space. “More is better” is actually a pretty good rule of thumb to follow.

The Urban Village Project’s clean design enables the eye to focus on the main message and key elements:

UrbanVillage

Stylist illustrates how white space can enhance brand elements and bring focus to a design.

BeStylist

The Drive New York also shows how sophisticated use of white space can enhance the uniqueness of the brand:

Andstudio has a striking minimal look that employs white space as one of the main design elements:

AndStudio

Making white space a part of the brand is an excellent way to drive the message home.

BePrint 2 is an excellent example of how plenty of white space can be used in combination with distinctive typography:

Show visitors how your creativity benefits them

Don’t make the mistake of trying to impress your client on how creative you are. Web design isn’t about you. It’s about what you can do for your visitors or those of your clients. A little creativity can be highly effective when it helps visitors imagine themselves actually using your product or service.

BeYoga 3 offers a clear message and shows what the service can help its customers to achieve:

Travelshift engages visitors in an experience they want to be part of:

In this BeBlogger 3 example the mood is set for adventure and thrilling storytelling:

BeExtreme 2 provides an excellent foundation for a travel agency, a travel blogger’s site, or a destination site. It can also be a source of inspiration that encourages viewers to get off the sofa and see more of the world.

Use sharp, crystal-clear photos and creative illustrations

Crystal clear photos and illustrations are important in different ways:

  1. The images themselves contribute to the message you want to convey
  2. They are more likely to be remembered as a brand element
  3. They reflect the professional care and craftsmanship involved in creating the design of the website.

Mechanic 4's distinctive illustration and brand colors make it very likely to be remembered, and it shows how an everyday service can be elevated with an unexpected design:

MetaMusic’s landing page design is composed of a noteworthy line illustration that gives a distinct character to the web presence and works in harmony with the other elements, like the typography:

BeCode 2 is another example by Be Theme that conveys how creative illustrations in combination with interesting shapes can bring a design to the next level and help provide a fresh perception of the service:

BeCafe 2 shows how using spellbinding “signature” images can make a web presence special and give it the unique touch that every brand seeks:

Papas Nativas, a dedicated page by Emergence Magazine, has a wonderful design with color filtered images that set the mood for the story:

The portfolio of Daphné Launay welcomes the visitor with one of her remarkable works. The design is built around the exquisite mood she sets with her subjects:

Sheep Inc’s eminent design and layout work in combination with a unique image style that won’t be forgotten:

Select a spellbinding color palette

The color palette you select and work from can make the difference between a website that isn't all that different from most others and one that gets a ton of attention. You don't have to be a certified interior decorator or an award-winning artist to get colors right. Following a couple of simple rules is all you need to do.

  • Choose colors that will attract immediate attention.
  • Support the brand as well as the message you’re trying to convey with your color selection.

BeLanguage 3 uses bright and bold, attention-grabbing colors to suggest how learning a new language can be both fun and rewarding.

The BeScienceCentre pre-built website is an example of using a color palette that grabs your attention and makes you want to browse more:

Shake proficiently uses their brand color to create a memorable experience of their website. The fade-out animation to white allows them to continue a minimal flow without having to compromise their impactful initial statement:

BePolyglot’s pleasing color theme positively enhances the idea behind the service and creates an inviting yet professional mood:

BeApp 4 is a great example of how colors and motif can be carried through the design:

BeProductions has a very minimal, yet impactful color approach. It cleverly uses the main brand color to amplify key elements of the design:

Make your CTAs easy to find and impossible to ignore

It can take more than a little time and effort to create a website that draws visitors in and gets them engaged, but if they have to hunt for a CTA button, or the button isn’t well integrated into the website’s flow, that time and effort can go for naught.

Put in another way: those buttons should practically beg to be clicked on.

Intact has an excellent way of highlighting the important actions on the site that goes well with the entire color theme and style of the website:

BeITService 2's CTA buttons are easy to locate. They act as a gate that invites you to explore further to find information or browse their products:

BeProduct 4’s CTAs complement the brand’s color theme, creating a clever contrast that attracts the eye and highlights the goal of exploring the service first, before purchasing it:

Starface World's quirky color theme incorporates few but catchy colors that make it impossible to miss the most important action on the site:

CTA buttons don’t necessarily have to stand out from every other element on a page. You can have them match other design elements and still attract attention if they are placed judiciously. BeBikeRental is a good example:

Building Creative Websites: Summing up

The secret to your success requires imagination and creativity on your part. The practical tips presented here provide you with a framework to work from. They are straightforward and not particularly difficult to execute. If you keep them in mind and have quality website tools to work with you should have little trouble putting them to good use.

You also want a tool that won’t bog you down or place unnecessary limitations on what you want to create. You want one that allows you to manage a quick pace as business begins to increase – which it will.

With Be Theme's extensive gallery of nearly 500 creative websites you will get boundless ideas for your next project. Every website is based on the principles mentioned before, and being highly customizable, it lets you focus on your client's needs. You'll be surprised by how quickly and easily you can craft a unique website with a powerful building tool like Be Theme.

A Showcase of Creative Websites and How to Build Uniquely Special Ones for Your Clients was written by Bogdan Sandu and published on Codrops.


Building a Physics-based 3D Menu with Cannon.js and Three.js


Yeah, shaders are good but have you ever heard of physics?

Nowadays, modern browsers are able to run an entire game in 2D or 3D. It means we can push the boundaries of modern web experiences to a more engaging level. The recent portfolio of Bruno Simon, in which you can play with a toy car, is the perfect example of that new kind of playful experience. He used Cannon.js and Three.js, but there are other physics libraries like Ammo.js or Oimo.js for 3D, or Matter.js for 2D.

In this tutorial, we’ll see how to use Cannon.js as a physics engine and render it with Three.js in a list of elements within the DOM. I’ll assume you are comfortable with Three.js and know how to set up a complete scene.

Prepare the DOM

This part is optional but I like to manage my JS with HTML or CSS. We just need the list of elements in our nav:

<nav class="mainNav | visually-hidden">
    <ul>
        <li><a href="#">Watermelon</a></li>
        <li><a href="#">Banana</a></li>
        <li><a href="#">Strawberry</a></li>
    </ul>
</nav>
<canvas id="stage"></canvas>

Prepare the scene

Let’s have a look at the important bits. In my Class, I call a method “setup” to init all my components. The other method we need to check is “setCamera” in which I use an Orthographic Camera with a distance of 15. The distance is important because all of our variables we’ll use further are based on this scale. You don’t want to work with too big numbers in order to keep it simple.

// Scene.js

import Menu from "./Menu";

// ...

export default class Scene {
    // ...
    setup() {
        // Set Three components
        this.scene = new THREE.Scene()
        this.scene.fog = new THREE.Fog(0x202533, -1, 100)

        this.clock = new THREE.Clock()

        // Set options of our scene
        this.setCamera()
        this.setLights()
        this.setRender()

        this.addObjects()

        this.renderer.setAnimationLoop(() => { this.draw() })

    }

    setCamera() {
        const aspect = window.innerWidth / window.innerHeight
        const distance = 15

        this.camera = new THREE.OrthographicCamera(-distance * aspect, distance * aspect, distance, -distance, -1, 100)

        this.camera.position.set(-10, 10, 10)
        this.camera.lookAt(new THREE.Vector3())
    }

    draw() {
        this.renderer.render(this.scene, this.camera)
    }

    addObjects() {
        this.menu = new Menu(this.scene)
    }

    // ...
}

Create the visible menu

Basically, we will parse all our elements in our menu, create a group in which we will initiate a new mesh for each letter at the origin position. As we’ll see later, we’ll manage the position and rotation of our mesh based on its rigid body.

If you don’t know how creating text in Three.js works, I encourage you to read the documentation. Moreover, if you want to use a custom font, you should check out facetype.js.

In my case, I’m loading a Typeface JSON file.

// Menu.js

export default class Menu {
  constructor(scene) {
    // DOM elements
    this.$navItems = document.querySelectorAll(".mainNav a");

    // Three components
    this.scene = scene;
    this.loader = new THREE.FontLoader();

    // Constants
    this.words = [];

    this.loader.load(fontURL, f => {
      this.setup(f);
    });
  }

  setup(f) {

    // These options give us a more candy-ish render on the font
    const fontOption = {
      font: f,
      size: 3,
      height: 0.4,
      curveSegments: 24,
      bevelEnabled: true,
      bevelThickness: 0.9,
      bevelSize: 0.3,
      bevelOffset: 0,
      bevelSegments: 10
    };


    // For each element in the menu...
    Array.from(this.$navItems)
      .reverse()
      .forEach(($item, i) => {
        // ... get the text ...
        const { innerText } = $item;

        const words = new THREE.Group();

        // ... and parse each letter to generate a mesh
        Array.from(innerText).forEach((letter, j) => {
          const material = new THREE.MeshPhongMaterial({ color: 0x97df5e });
          const geometry = new THREE.TextBufferGeometry(letter, fontOption);

          const mesh = new THREE.Mesh(geometry, material);
          words.add(mesh);
        });

        this.words.push(words);
        this.scene.add(words);
      });
  }
}

Building a physical world

Cannon.js uses Three.js' render loop to calculate the forces that rigid bodies sustain between each frame. We'll start by setting a global force you probably already know: gravity.

// Scene.js

import C from 'cannon'

// …

setup() {
    // Init Physics world
    this.world = new C.World()
    this.world.gravity.set(0, -50, 0)

    // … 
}

// … 

addObjects() {
    // We now need to pass the physics world as an argument
    this.menu = new Menu(this.scene, this.world);
}


draw() {
    // Create our method to update the physics
    this.updatePhysics();

    this.renderer.render(this.scene, this.camera);
}

updatePhysics() {
    // We need this to synchronize the Three.js meshes and the Cannon.js rigid bodies
    this.menu.update()

    // As simple as that!
    this.world.step(1 / 60);
}

// …

As you see, we set a gravity of -50 on the Y-axis. It means that all our bodies will undergo a downward force of 50 every frame, indefinitely, until they encounter another body or the floor. Notice that if we change the scale of our elements or the distance of our camera, we also need to adjust the gravity value.
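Side note: stepping with a fixed 1/60 assumes a 60fps display. A hedged variant of updatePhysics() (reusing the THREE.Clock created in setup()) passes the real elapsed time, letting Cannon interpolate with its fixed internal timestep:

// A sketch, not the article's code: the second argument is the real time
// elapsed since the last call, the third caps the number of sub-steps.
updatePhysics() {
    this.menu.update();
    this.world.step(1 / 60, this.clock.getDelta(), 3);
}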

Rigid bodies

Rigid bodies are simpler, invisible shapes used to represent our meshes in the physical world. Usually their geometry is far more elementary than the rendered mesh, because the fewer vertices we have to calculate, the faster the simulation runs.

Note that “soft bodies” also exist. They represent bodies whose meshes deform under forces (like other objects pushing them, or simply gravity affecting them).

For our purpose, we will create a simple box matching each letter's size, and place it in the correct position.

There are a lot of things to update in Menu.js so let’s look at every part.

First, we need two more constants:

// Menu.js

// It will calculate the Y offset between each element.
const margin = 6;
// And this constant is to keep the same total mass on each word. We don't want a small word to be lighter than the others. 
const totalMass = 1;

The totalMass affects the friction on the ground and the force we'll apply later. At this moment, “1” is enough.

// …

export default class Menu {
    constructor(scene, world) {
        // … 
        this.world = world
        this.offset = this.$navItems.length * margin * 0.5;
    }


  setup(f) {
        // … 
        Array.from(this.$navItems).reverse().forEach(($item, i) => {
            // … 
            words.letterOff = 0;

            Array.from(innerText).forEach((letter, j) => {
                const material = new THREE.MeshPhongMaterial({ color: 0x97df5e });
                const geometry = new THREE.TextBufferGeometry(letter, fontOption);

                geometry.computeBoundingBox();
                geometry.computeBoundingSphere();

                const mesh = new THREE.Mesh(geometry, material);
                // Get size of our entire mesh
                mesh.size = mesh.geometry.boundingBox.getSize(new THREE.Vector3());

                // We'll use this accumulator to get the offset of each letter. Notice that this is not perfect because each character of each font has specific kerning.
                words.letterOff += mesh.size.x;

                // Create the shape of our letter
                // Note that we scale the size by half because Cannon's Box takes half-extents
                const box = new C.Box(new C.Vec3().copy(mesh.size).scale(0.5));

                // Attach the body directly to the mesh
                mesh.body = new C.Body({
                    // We divide the totalMass by the length of the string to have a common weight for each word.
                    mass: totalMass / innerText.length,
                    position: new C.Vec3(words.letterOff, this.getOffsetY(i), 0)
                });

                // Add the shape to the body and offset it to match the center of our mesh
                const { center } = mesh.geometry.boundingSphere;
                mesh.body.addShape(box, new C.Vec3(center.x, center.y, center.z));
                // Add the body to our world
                this.world.addBody(mesh.body);
                words.add(mesh);
            });

            // Recenter each body based on the whole string.
            words.children.forEach(letter => {
                letter.body.position.x -= letter.size.x + words.letterOff * 0.5;
            });

            // Same as before
            this.words.push(words);
            this.scene.add(words);
        })
    }

    // Function that returns the exact offset to center our menu in the scene
    getOffsetY(i) {
        return (this.$navItems.length - i - 1) * margin - this.offset;
    }

    // ...

}
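One missing piece: updatePhysics() calls this.menu.update() every frame, but that method isn't shown in these snippets. A minimal sketch of it (an assumption consistent with the code above) simply copies each rigid body's transform back onto its mesh:

// Menu.js: a sketch of the update() method called from updatePhysics().
// Every letter mesh copies the position and rotation of its Cannon body.
update() {
    if (!this.words) return;

    this.words.forEach(word => {
        word.children.forEach(letter => {
            letter.position.copy(letter.body.position);
            letter.quaternion.copy(letter.body.quaternion);
        });
    });
}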

You should have your menu centered in your scene, falling to the infinite and beyond. Let’s create the ground of each element of our menu in our words loop:

// …

words.ground = new C.Body({
    mass: 0,
    shape: new C.Box(new C.Vec3(50, 0.1, 50)),
    position: new C.Vec3(0, i * margin - this.offset, 0)
});

this.world.addBody(words.ground);

// … 

A shape called “Plane” exists in Cannon. It represents a mathematical plane facing the positive Z-axis, and it's usually used as a ground. Unfortunately, it doesn't work with stacked grounds like ours. Using a box is probably the easiest way to make the ground in this case.
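For reference, a minimal sketch of what the Plane approach would look like (not used here, precisely because our grounds are stacked):

// A sketch of the Plane alternative: a Cannon plane is infinite and faces
// the positive Z-axis by default, so it must be rotated to face upwards.
const ground = new C.Body({ mass: 0, shape: new C.Plane() });
ground.quaternion.setFromAxisAngle(new C.Vec3(1, 0, 0), -Math.PI / 2);
this.world.addBody(ground);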

Interaction with the physical world

We have an entire world of physics beneath our fingers but how to interact with it?

We calculate the mouse position and, on each click, cast a ray (raycaster) from the camera through it. The raycaster returns the objects the ray passes through, along with extra information like the contact point, the face and its normal.

Normals are vectors perpendicular to each vertex and face of a mesh:

We will get the clicked face, take its normal, reverse it and multiply it by a constant we have defined. Finally, we'll apply this vector to our clicked body to give it an impulse.

To make it easier to understand and read, we will pass a 3rd argument to our menu, the camera.

// Scene.js
this.menu = new Menu(this.scene, this.world, this.camera);
// Menu.js
// A new constant for our global force on click
const force = 25;

constructor(scene, world, camera) {
    this.camera = camera;

    this.mouse = new THREE.Vector2();
    this.raycaster = new THREE.Raycaster();

    // Bind events
    document.addEventListener("click", () => { this.onClick(); });
    window.addEventListener("mousemove", e => { this.onMouseMove(e); });
}

onMouseMove(event) {
    // We set the normalized coordinate of the mouse
    this.mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
    this.mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
}

onClick() {
    // update the picking ray with the camera and mouse position
    this.raycaster.setFromCamera(this.mouse, this.camera);

    // calculate objects intersecting the picking ray
    // It will return an array with intersecting objects
    const intersects = this.raycaster.intersectObjects(
        this.scene.children,
        true
    );

    if (intersects.length > 0) {
        const obj = intersects[0];
        const { object, face } = obj;

        if (!object.isMesh) return;

        const impulse = new THREE.Vector3()
        .copy(face.normal)
        .negate()
        .multiplyScalar(force);

        this.words.forEach((word, i) => {
            word.children.forEach(letter => {
                const { body } = letter;

                if (letter !== object) return;

                // We apply the 'impulse' vector at the origin of the body
                body.applyLocalImpulse(impulse, new C.Vec3());
            });
        });
    }
}

Constraints and connections

As you can see, at the moment you can punch each letter like the superman or superwoman you are. But even though this already looks cool, we can do better by connecting the letters to each other. In Cannon, these connections are called constraints. This is probably the most satisfying part of working with physics.

// Menu.js

setup() {
    // At the end of this method
    this.setConstraints()
}

setConstraints() {
    this.words.forEach(word => {
        for (let i = 0; i < word.children.length; i++) {
        // We get the current letter and the next letter (if the current one isn't the last)
        const letter = word.children[i];
        const nextLetter =
            i === word.children.length - 1 ? null : word.children[i + 1];

        if (!nextLetter) continue;

        // I chose ConeTwistConstraint because it's more rigid than other constraints and it suits my purpose
        const c = new C.ConeTwistConstraint(letter.body, nextLetter.body, {
            pivotA: new C.Vec3(letter.size.x, 0, 0),
            pivotB: new C.Vec3(0, 0, 0)
        });

        // Optional, but it gives us a more realistic render in my opinion
        c.collideConnected = true;

        this.world.addConstraint(c);
        }
    });
}

To correctly explain how these pivots work, check out the following figure:

(letter.size.x, 0, 0) marks the end of the current letter, which gets attached to the origin of the next letter.

Remove the sandpaper on the floor

As you have probably noticed, it seems like our ground is made of sandpaper. That's something we can change. In Cannon there are materials, just like in Three, except that these materials are physics-based. Basically, a material lets you set the friction and the restitution of a surface. Are our letters made of rock, or rubber? Or are they maybe slippery?

Moreover, we can define the contact material. It means that if I want my letters to be slippery against each other but bouncy against the ground, I could do that. In our case, we want a letter to slip when we punch it.

// In the beginning of my setup method I declare these
const groundMat = new C.Material();
const letterMat = new C.Material();

const contactMaterial = new C.ContactMaterial(groundMat, letterMat, {
    friction: 0.01
});

this.world.addContactMaterial(contactMaterial);

Then we set the materials to their respective bodies:

// ...
words.ground = new C.Body({
    mass: 0,
    shape: new C.Box(new C.Vec3(50, 0.1, 50)),
    position: new C.Vec3(0, i * margin - this.offset, 0),
    material: groundMat
});
// ...
mesh.body = new C.Body({
    mass: totalMass / innerText.length,
    position: new C.Vec3(words.letterOff, this.getOffsetY(i), 0),
    material: letterMat
});
// ...

Tada! You can push it like the Rocky you are.

Final words

I hope you have enjoyed this tutorial! I have the feeling that we’ve reached the point where we can push interfaces to behave more realistically and be more playful and enjoyable. Today we’ve explored a physics-powered menu that reacts to forces using Cannon.js and Three.js. We can also think of other use cases, like images that behave like cloth and get distorted by a click or similar.

Cannon.js is very powerful. I encourage you to check out all the examples, share, comment and give some love and don’t forget to check out all the demos!

Building a Physics-based 3D Menu with Cannon.js and Three.js was written by Arno Di Nunzio and published on Codrops.

Collective #573










No to Chrome

It’s important to raise awareness about the surveillance machine Google has become and “No to Chrome” is an attempt to do so by urging to seek out a better web browser as a simple first step to oppose Google’s intrusive disregard for our rights.

Check it out






React View

React View is a set of tools that aspires to close the gap between users, developers and designers of component libraries.

Check it out










Collective #573 was written by Pedro Botelho and published on Codrops.

Collective #574








Happy Hues

Happy Hues is a color palette inspiration site that acts as a real world example of how color palettes can be used in a project. By Mackenzie Child.

Check it out





Raw WebGL

In case you missed it: A great guide that will teach you key data structures and types that are needed to draw in WebGL.

Read it













Collective #574 was written by Pedro Botelho and published on Codrops.

Scroll, Refraction and Shader Effects in Three.js and React


In this tutorial I will show you how to take a couple of established techniques (like tying things to the scroll-offset), and cast them into re-usable components. Composition will be our primary focus.

In this tutorial we will:

  • build a declarative scroll rig
  • mix HTML and canvas
  • handle async assets and loading screens via React.Suspense
  • add shader effects and tie them to scroll
  • and as a bonus: add an instanced variant of Jesper Vos' multiside refraction shader

Setting up

We are using React, hooks, Three.js and react-three-fiber. The latter is a renderer for Three.js which allows us to declare the scene graph by breaking up tasks into self-contained components. However, you still need to know a bit of Three.js. All there is to know about react-three-fiber you can find on the GitHub repo’s readme. Check out the tutorial on alligator.io, which goes into the why and how.

We don’t emulate a scroll bar, which would take away browser semantics. A real scroll-area in front of the canvas with a set height and a listener is all we need.

I decided to divide the content into:

  • virtual content sections
  • and pages, each 100vh long, which define how long the scroll area is

function App() {
  const scrollArea = useRef()
  const onScroll = e => (state.top.current = e.target.scrollTop)
  useEffect(() => void onScroll({ target: scrollArea.current }), [])
  return (
    <>
      <Canvas orthographic>{/* Contents ... */}</Canvas>
      <div ref={scrollArea} onScroll={onScroll}>
        <div style={{ height: `${state.pages * 100}vh` }} />
      </div>
    </>
  )
}

scrollTop is written into a reference because it will be picked up by the render-loop, which is carrying out the animations. Re-rendering for frequently changing state doesn’t make sense.

A first-run effect synchronizes the local scrollTop with the actual one, which may not be zero.
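
The snippets in this tutorial read from a shared state object that isn’t shown here. A minimal sketch of what it could look like, with field names inferred from their usage in the code (the real demo keeps more data in it):

import { createRef } from 'react'

export const state = {
  sections: 3,     // number of content sections to distribute
  pages: 3,        // scroll length, in multiples of 100vh
  zoom: 75,        // orthographic camera zoom factor
  top: createRef() // mutable scrollTop, written outside of React renders
}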

Building a declarative scroll rig

There are many ways to go about it, but generally it would be nice if we could distribute content across the number of sections in a declarative way while the number of pages defines how long we have to scroll. Each content-block should have:

  • an offset, which is the section index, given 3 sections, 0 means start, 2 means end, 1 means in between
  • a factor, which gets added to the offset position and subtracted using scrollTop; it controls the block’s speed and direction

Blocks should also be nestable, so that sub-blocks know their parents’ offset and can scroll along.

const offsetContext = createContext(0)

function Block({ children, offset, factor, ...props }) {
  const ref = useRef()
  // Fetch parent offset and the height of a single section
  const { offset: parentOffset, sectionHeight } = useBlock()
  offset = offset !== undefined ? offset : parentOffset
  // Runs every frame and lerps the inner block into its place
  useFrame(() => {
    const curY = ref.current.position.y
    const curTop = state.top.current
    ref.current.position.y = lerp(curY, (curTop / state.zoom) * factor, 0.1)
  })
  return (
    <offsetContext.Provider value={offset}>
      <group {...props} position={[0, -sectionHeight * offset * factor, 0]}>
        <group ref={ref}>{children}</group>
      </group>
    </offsetContext.Provider>
  )
}

This is a block-component. Above all, it wraps the offset that it is given into a context provider so that nested blocks and components can read it out. Without an offset it falls back to the parent offset.

It defines two groups. The first is for the target position, which is the height of one section multiplied by the offset and the factor. The second, inner group is animated and cancels out the factor. When the user scrolls to the given section offset, the block will be centered.
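
The lerp used in the frame loop isn’t defined in the excerpt; any linear interpolation helper will do, for example:

// Move `current` a fraction of the way toward `target` each call.
const lerp = (current, target, speed) => current + (target - current) * speed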

We use that along with a custom hook which allows any component to access block-specific data. This is how any component gets to react to scroll.

function useBlock() {
  const { viewport } = useThree()
  const offset = useContext(offsetContext)
  // zoom, pages and sections come from shared state (see the sketch above)
  const canvasWidth = viewport.width / zoom
  const canvasHeight = viewport.height / zoom
  const sectionHeight = canvasHeight * ((pages - 1) / (sections - 1))
  // ...
  return { offset, canvasWidth, canvasHeight, sectionHeight }
}
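
To get a feel for the sectionHeight formula, a quick worked example with made-up numbers:

// pages = 3, sections = 3: sectionHeight = canvasHeight * (2 / 2) = canvasHeight,
// so scrolling exactly one page moves the content by one section.
// pages = 4, sections = 3: sectionHeight = canvasHeight * (3 / 2),
// so sections sit one and a half canvas heights apart.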

We can now compose and nest blocks conveniently:

<Block offset={2} factor={1.5}>
  <Content>
    <Block factor={-0.5}>
      <SubContent />
    </Block>
  </Content>
</Block>

Anything can read from block-data and react to it (like that spinning cross):

function Cross() {
  const ref = useRef()
  const { viewportHeight } = useBlock()
  useFrame(() => {
    const curTop = state.top.current
    const nextY = (curTop / ((state.pages - 1) * viewportHeight)) * Math.PI
    ref.current.rotation.z = lerp(ref.current.rotation.z, nextY, 0.1)
  })
  return (
    <group ref={ref}>
      {/* cross geometry ... */}
    </group>
  )
}

Mixing HTML and canvas, and dealing with assets

Keeping HTML in sync with the 3D world

We want to keep layout and text-related things in the DOM. However, keeping it in sync is a bit of a bummer in Three.js; messing with createElement and camera calculations is no fun.

In three-fiber all you need is the <Dom /> helper (@beta atm). Throw this into the canvas and add declarative HTML. This is all it takes for it to move along with its parents’ world-matrix.

<group position={[10, 0, 0]}>
  <Dom><h1>hello</h1></Dom>
</group>

Accessibility

If we strictly divide between layout and visuals, supporting a11y is possible. Dom elements can be behind the canvas (via the prepend prop), or in front of it. Make sure to place them in front if you need them to be accessible.
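
A sketch of both placements (using the beta <Dom /> helper mentioned above):

{/* Behind the canvas: purely decorative */}
<Dom prepend>
  <h2>Background caption</h2>
</Dom>

{/* In front of the canvas: reachable by assistive technology */}
<Dom>
  <a href="#projects">Skip to projects</a>
</Dom>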

Responsiveness, media-queries, etc.

While the DOM fragments can rely on CSS, their positioning overall relies on the scene graph. Canvas elements on the other hand know nothing of the sort, so making it all work on smaller screens can be a bit of a challenge.

Fortunately, three-fiber has auto-resize inbuilt. Any component requesting size data will be automatically informed of changes.

You get:

  • viewport, the size of the canvas in its own units, must be divided by camera.zoom for orthographic cameras
  • size, the size of the screen in pixels
const { viewport, size } = useThree()

Most of the relevant calculations for margins, maxWidth and so on have been made in useBlock.
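
As a sketch of what such values could look like inside useBlock (the breakpoint and factors are assumptions, not the demo’s exact numbers):

function useBlock() {
  const { viewport, size } = useThree()
  // ... offset and canvas measurements as before
  const mobile = size.width < 700
  const margin = canvasWidth * (mobile ? 0.2 : 0.1)
  const contentMaxWidth = canvasWidth * (mobile ? 0.8 : 0.6)
  return { mobile, margin, contentMaxWidth /* , ... */ }
}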

Handling async assets and loading screens via React.Suspense

Concerning assets, React’s Suspense allows us to control loading and caching: when components should show up, in what order, fallbacks, and how errors are handled. It makes something like a loading screen or a start-up animation almost too easy.

The following will suspend all contents until each and every component, even nested ones, has its async data ready. Meanwhile it will show a fallback. When everything is there, the <Startup /> component will render along with everything else.

<Suspense fallback={<Fallback />}>
  <AsyncContent />
  <Startup />
</Suspense>

In three-fiber you can suspend a component with the useLoader hook, which takes any Three.js loader, then loads (and caches) assets with it.

function Image() {
  const texture = useLoader(THREE.TextureLoader, "/texture.png")
  // It will only get here if the texture has been loaded
  return (
    <mesh>
      <meshBasicMaterial attach="material" map={texture} />
    </mesh>
  )
}

Adding shader effects and tying them to scroll

The custom shader in this demo is a Frankenstein based on the Three.js MeshBasicMaterial, with a few custom effects mixed in.

The relevant portion of code in which we feed the shader block-specific scroll data is this one:

material.current.scale =
  lerp(material.current.scale, offsetFactor - top / ((pages - 1) * viewportHeight), 0.1)
material.current.shift =
  lerp(material.current.shift, (top - last) / 150, 0.1)

Adding Diamonds

The technique is explained in full detail in the article Real-time Multiside Refraction in Three Steps by Jesper Vos. I placed Jesper’s code into a re-usable component, so that it can be mounted and unmounted, taking care of all the render logic. I also changed the shader slightly to enable instancing, which now allows us to draw dozens of these onto the screen without hitting a performance snag anytime soon.

The component reads out block-data like everything else. The diamonds are put into place according to the scroll offset by distributing the instanced meshes. Instanced rendering itself is a relatively new feature in Three.js.
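
As a rough illustration of the instancing idea (this is not Jesper’s actual component), an instancedMesh draws every diamond in a single call while a throwaway Object3D writes one matrix per instance:

const dummy = new THREE.Object3D()

function Diamonds({ count = 20 }) {
  const ref = useRef()
  const { sectionHeight } = useBlock()
  useFrame(() => {
    for (let i = 0; i < count; i++) {
      // place each instance; a real version would factor in the scroll offset
      dummy.position.set((i % 5) - 2, -sectionHeight * (i / count), 0)
      dummy.updateMatrix()
      ref.current.setMatrixAt(i, dummy.matrix)
    }
    ref.current.instanceMatrix.needsUpdate = true
  })
  return (
    <instancedMesh ref={ref} args={[null, null, count]}>
      <octahedronBufferGeometry attach="geometry" />
      <meshBasicMaterial attach="material" />
    </instancedMesh>
  )
}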

Wrapping up

This tutorial may give you a general idea, but there are many things that are possible beyond the generic parallax; you can tie anything to scroll. Above all, being able to compose and re-use components goes a long way and is so much easier than dealing with a soup of code fragments whose implicit contracts span the codebase.

Scroll, Refraction and Shader Effects in Three.js and React was written by Paul Henschel and published on Codrops.

Case Study: Portfolio of Bruno Arizio

Introduction

Bruno Arizio, Designer — @brunoarizio

Since I first became aware of the energy in this community, I felt the urge to be more engaged in this ‘digital avant-garde landscape’ that is being cultivated by the amazing people behind Codrops, Awwwards, CSSDA, The FWA, Webby Awards, etc. That energy propelled me to set up this new portfolio, which acted as a way of dipping my toes into the water and getting used to the temperature.

I see this community being responsible for pushing the limits of what is possible on the web, fostering the right discussions and empowering the role of creative developers and creative designers across the world.

With this in mind, it’s difficult not to think of the great art movements of the past and their role in mediating change. You can easily draw a parallel between this digital community and the Impressionist painters of the 19th century, or the Bauhaus movement leading our society into modernism a few decades ago. What these periods have in common is that they pushed the boundaries of what was possible and of what would become the new standard, doing so through relentless experimentation. The result of that is the world we live in, the products we interact with, and the buildings we inhabit.

The websites that win awards today do so because they innovate in some aspect, and those innovations eventually become a new standard. We can see that in the apps used by millions of people, in consumer websites, and so on. That is the impact that we make.

I’m not saying that a new interaction featured on a portfolio launched last week will be in the hands of millions of people across the globe the following week, but constantly pushing these interactions to their limits will scale them and eventually get them adopted as new standards. This is the kind of responsibility that is in our hands.

Open Source

We decided to be transparent and take a step forward in making this entire project open source so people can learn how to make the things we created. We are both interested in supporting the community, so feel free to ask us questions on Twitter or Instagram (@brunoarizio and @lhbzr); we welcome you to do so!

The repository is available on GitHub.

Design Process

With the portfolio, we took a meticulous approach to motion and collaborated to devise deliberate interactions that have a ‘realness’ to them, especially on the main page.

The mix of the bending animation with the distortion effect was central to making the website ‘tactile’. It is meant to feel good when you shuffle through the projects, and since it was published we received a lot of messages from people saying how addictive the navigation is.

A lot of my new ideas come from experimenting with shaders and filters in After Effects, and just after I find what I’m looking for — the ‘soul’ of the project — I start to add the ‘grid layer’ and begin to structure the typography and other elements.

In this project, before jumping into Sketch, I started working with a variety of motion concepts in AE, and that’s when the version with the convection bending came in and we decided to take it forward. So we can pretty much say that the project was born from motion rather than from a layout. After the main idea was solid enough, I took it to Sketch, designed a simple grid and applied the typography.

Collaboration

Working in collaboration with Luis was very productive. This is the second (of many to come) project we’ve worked on together, and I can safely say that we had a strong connection from start to finish, which was absolutely important for the final result. It wasn’t a case in which the designer creates the layouts and hands them over to a developer, period. This was a nuanced relationship of constant feedback. We collaborated daily from idea to production, and it was fantastic how dev and design shared a keen eye for perfectionism.

From layout to code we were constantly fine-tuning every aspect: from the cursor kinetics to making overhaul layout changes and finding the right tone for the easing curves and the noise mapping on the main page.

When you design a portfolio, especially your own, it feels daunting, since you are free to do whatever you want. But the consequence is that it will dictate how people see your work, and what work you will be doing shortly after. So making the right decisions deliberately and predicting their impact is mandatory for success.

Technical Breakdown

Luis Henrique Bizarro, Creative Developer — @lhbzr

Motion Reference

This was the video of the motion reference that Bruno shared with me when he introduced his ideas for the portfolio. I think one of the most important things when starting a project like this, with the idea of implementing a lot of different animations, is to create a little prototype in After Effects that guides the developer toward achieving similar results with code.

The Tech Stack

The portfolio was developed with my favorite stack to work with right now; it gives me a lot of freedom to focus on animations and interactions instead of having to follow the guidelines of a specific framework.

In this particular project, most of the code was written from scratch using ECMAScript 2015+ features like Classes, Modules, and Promises to handle the route transitions and other things in the application.

In this case study, we’ll be focusing on the WebGL implementation, since it’s the core animation of the website and the most interesting thing to talk about.

1. How to measure things in Three.js

This specific subject was already covered in other Codrops articles, but in case you’ve never heard of it before: when you’re working with Three.js, you’ll need to make some calculations in order to get values that represent the actual size of your browser’s viewport.

In my last projects, I’ve been using this Gist by Florian Morel, which is basically a calculation that uses your camera field-of-view to return the values for the height and width of the Three.js environment.

// createCamera()
const fov = THREEMath.degToRad(this.camera.fov);
const height = 2 * Math.tan(fov / 2) * this.camera.position.z;
const width = height * this.camera.aspect;
        
this.environment = {
  height,
  width
};

// createPlane()
const { height, width } = this.environment;

this.plane = new PlaneBufferGeometry(width * 0.75, height * 0.75, 100, 50);

I usually store these two variables in the wrapper class of my applications; this way we just need to pass them to the constructors of other elements that will use them.
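
A hypothetical sketch of that pattern (Cover is a made-up element name):

class Cover {
  constructor(environment) {
    // environment carries the measured { width, height } from the wrapper class
    const { width, height } = environment;

    this.geometry = new PlaneBufferGeometry(width * 0.75, height * 0.75, 100, 50);
  }
}

// Inside the wrapper class:
// this.cover = new Cover(this.environment);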

In the embed below, you have a very simple implementation of a PlaneBufferGeometry that covers 75% of the height and width of your viewport using this solution.

2. Uploading textures to the GPU and using them in Three.js

To avoid textures being processed at runtime while the user is navigating through the website, I consider it very good practice to upload all images to the GPU as soon as they’re ready. On Bruno’s portfolio, this process happens during the preloading of the website. (Kudos to Fabio Azevedo for introducing me to this concept a long time ago in previous projects.)

Two other good additions, in case you don’t want Three.js to resize and process the images you’re going to use as textures, are disabling mipmaps and changing how the texture is sampled, via the generateMipmaps and minFilter attributes.

this.loader = new TextureLoader();

this.loader.load(image, texture => {
  texture.generateMipmaps = false;
  texture.minFilter = LinearFilter;
  texture.needsUpdate = true;

  this.renderer.initTexture(texture, 0);
});

The .initTexture() method was introduced in recent versions of Three.js in the WebGLRenderer class, so make sure to update to the latest version of the library to be able to use this feature.

But my texture is looking stretched! The default behavior of the map attribute of Three.js’ MeshBasicMaterial is to stretch your image to fit the PlaneBufferGeometry. This happens because of the way the library handles 3D models. But in order to keep the original aspect ratio of your image, you’ll need to do some calculations as well.

There are a lot of different solutions out there that don’t use GLSL shaders, but in our case we’ll need shaders anyway to implement our animations. So let’s implement the aspect ratio calculation in the fragment shader that will be created for the ShaderMaterial class.

So, all you need to do is pass your Texture to your ShaderMaterial via the uniforms attribute; in the fragment shader, you’ll then be able to use all the variables passed this way.

The Three.js Uniform documentation is a good reference for what happens internally when you pass the values. For example, if you pass a Vector2, you’ll be able to use it as a vec2 inside your shaders.

We need two vec2 variables to do the aspect ratio calculations: the image resolution and the resolution of the renderer. After passing them to the fragment shader, we just need to implement our calculations.

this.material = new ShaderMaterial({
  uniforms: {
    image: {
      value: texture
    },
    imageResolution: {
      value: new Vector2(texture.image.width, texture.image.height)
    },
    resolution: {
      value: new Vector2(window.innerWidth, window.innerHeight)
    }
  },
  fragmentShader: `
    uniform sampler2D image;
    uniform vec2 imageResolution;
    uniform vec2 resolution;

    varying vec2 vUv;

    void main() {
        vec2 ratio = vec2(
          min((resolution.x / resolution.y) / (imageResolution.x / imageResolution.y), 1.0),
          min((resolution.y / resolution.x) / (imageResolution.y / imageResolution.x), 1.0)
        );

        vec2 uv = vec2(
          vUv.x * ratio.x + (1.0 - ratio.x) * 0.5,
          vUv.y * ratio.y + (1.0 - ratio.y) * 0.5
        );

        gl_FragColor = vec4(texture2D(image, uv).xyz, 1.0);
    }
  `,
  vertexShader: `
    varying vec2 vUv;

    void main() {
        vUv = uv;

        vec3 newPosition = position;

        gl_Position = projectionMatrix * modelViewMatrix * vec4(newPosition, 1.0);
    }
  `
});

In this snippet we’re using template strings to represent the code of our shaders only to keep it simple when using CodeSandbox, but I highly recommend using glslify to split your shaders into multiple files to keep your code more organized in a more robust development environment.

We’re all good with the images now! They preserve their original aspect ratio and we also have control over how much space they’ll use in our viewport.

3. How to implement infinite scrolling

Infinite scrolling can be quite challenging, but in a Three.js environment the implementation is smoother than it would be with HTML elements and CSS transforms, because you don’t need to worry about storing the elements’ original positions and calculating their distances to avoid browser repaints.

Overall, a simple logic for the infinite scrolling should follow these two basic rules:

  • If you’re scrolling down, your elements move up — when your first element isn’t on the screen anymore, you should move it to the end of the list.
  • If you’re scrolling up, your elements move down — when your last element isn’t on the screen anymore, you should move it to the start of the list.

Sounds reasonable, right? So, first we need to detect in which direction the user is scrolling.

this.position.current += (this.scroll.values.target - this.position.current) * 0.1;

if (this.position.current < this.position.previous) {
  this.direction = "up";
} else if (this.position.current > this.position.previous) {
  this.direction = "down";
} else {
  this.direction = "none";
}

this.position.previous = this.position.current;

The variable this.scroll.values.target is responsible for defining which scroll position the user wants to go to. The variable this.position.current represents the current scroll position; it moves smoothly toward the target value thanks to the * 0.1 multiplication.

After detecting the direction the user is scrolling in, we store the current position in the this.position.previous variable; this way we’ll also have the right direction value inside the requestAnimationFrame loop.
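
For context, all of this runs once per frame. A minimal sketch of the loop (assumed structure, not the portfolio’s actual code):

// Bound once in the constructor: this.update = this.update.bind(this);
update() {
  // ease the current position toward the target (same smoothing as above)
  this.position.current += (this.scroll.values.target - this.position.current) * 0.1;

  this.check(); // direction detection and wrapping, implemented below

  this.renderer.render(this.scene, this.camera);

  window.requestAnimationFrame(this.update);
}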

Now we need a checking method that gives our items the expected behavior based on the scroll direction and their position. It looks like this:

check() {
  const { height } = this.environment;
  const heightTotal = height * this.covers.length;

  if (this.position.current < this.position.previous) {
    this.direction = "up";
  } else if (this.position.current > this.position.previous) {
    this.direction = "down";
  } else {
    this.direction = "none";
  }

  this.projects.forEach(child => {
    child.isAbove = child.position.y > height;
    child.isBelow = child.position.y < -height;

    if (this.direction === "down" && child.isAbove) {
      const position = child.location - heightTotal;

      child.isAbove = false;
      child.isBelow = true;

      child.location = position;
    }

    if (this.direction === "up" && child.isBelow) {
      const position = child.location + heightTotal;

      child.isAbove = true;
      child.isBelow = false;

      child.location = position;
    }

    child.update(this.position.current);
  });
}

Now our logic for the infinite scroll is finally finished! Drag and drop the embed below to see it working.

You can also view the fullscreen demo here.

4. Integrate animations with infinite scrolling

The website motion reference has four different animations happening while the user is scrolling:

  • Movement on the z-axis: the image moves from the back to the front.
  • Bending on the z-axis: the image bends a little bit depending on its position.
  • Image scaling: the image scales slightly when moving out of the screen.
  • Image distortion: the image is distorted when we start scrolling.

My approach to implementing the animations was to use a calculation of the element position divided by the viewport height, giving me a percentage number between -1 and 1. This way I’ll be able to map this percentage into other values inside the ShaderMaterial instance.

  • -1 represents the bottom of the viewport.
  • 0 represents the middle of the viewport.
  • 1 represents the top of the viewport.
const percent = this.position.y / this.environment.height; 
const percentAbsolute = Math.abs(percent);

The implementation of the z-axis animation is pretty simple, because it can be done directly with JavaScript using this.position.z from Mesh, so the code for this animation looks like this:

this.position.z = map(percentAbsolute, 0, 1, 0, -50);
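
The map helper isn’t defined in the excerpt; a common implementation, matching how it’s called here, linearly remaps a value from one range to another:

// map(value, inMin, inMax, outMin, outMax)
const map = (value, inMin, inMax, outMin, outMax) =>
  outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);

// e.g. map(0.5, 0, 1, 0, -50) === -25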

The implementation of the bending animation is slightly more complex: we need to use the vertex shader to bend our PlaneBufferGeometry. I chose distortion as the value that controls this animation inside the shaders. We also pass two other parameters, distortionX and distortionY, which control the amount of distortion on the x and y axes.

this.material.uniforms.distortion.value = map(percentAbsolute, 0, 1, 0, 5);
uniform float distortion;
uniform float distortionX;
uniform float distortionY;

varying vec2 vUv;

void main() {
  vUv = uv;

  vec3 newPosition = position;

  // 50 is the number of x-axis vertices we have in our PlaneBufferGeometry.
  float distanceX = length(position.x) / 50.0;
  float distanceY = length(position.y) / 50.0;

  float distanceXPow = pow(distortionX, distanceX);
  float distanceYPow = pow(distortionY, distanceY);

  newPosition.z -= distortion * max(distanceXPow + distanceYPow, 2.2);

  gl_Position = projectionMatrix * modelViewMatrix * vec4(newPosition, 1.0);
}

The implementation of image scaling was made with a single function inside the fragment shader:

this.material.uniforms.scale.value = map(percent, 0, 1, 0, 0.5);
vec2 zoom(vec2 uv, float amount) {
  return 0.5 + ((uv - 0.5) * (1.0 - amount));
}

void main() {
  // ...

  uv = zoom(uv, scale);

  // ...
}

The implementation of the distortion was made with glsl-noise and a simple calculation displacing the texture on the x and y axes based on user gestures:

onTouchStart() {
  TweenMax.to(this.material.uniforms.displacementY, 0.4, {
    value: 0.1
  });
}

onTouchEnd() {
  TweenMax.killTweensOf(this.material.uniforms.displacementY);

  TweenMax.to(this.material.uniforms.displacementY, 0.4, {
    value: 0
  });
}
#pragma glslify: cnoise = require(glsl-noise/classic/3d)

void main() {
  // ...

  float noise = cnoise(vec3(uv, cos(time * 0.1)) * 10.0 + time * 0.5);

  uv.x += noise * displacementX;
  uv.y += noise * displacementY;

  // ...
}

And here is the final code of the fragment shader, merging all three animations together.

#pragma glslify: cnoise = require(glsl-noise/classic/3d)

uniform float alpha;
uniform float displacementX;
uniform float displacementY;
uniform sampler2D image;
uniform vec2 imageResolution;
uniform vec2 resolution;
uniform float scale;
uniform float time;

varying vec2 vUv;

vec2 zoom(vec2 uv, float amount) {
  return 0.5 + ((uv - 0.5) * (1.0 - amount));
}

void main() {
  vec2 ratio = vec2(
    min((resolution.x / resolution.y) / (imageResolution.x / imageResolution.y), 1.0),
    min((resolution.y / resolution.x) / (imageResolution.y / imageResolution.x), 1.0)
  );

  vec2 uv = vec2(
    vUv.x * ratio.x + (1.0 - ratio.x) * 0.5,
    vUv.y * ratio.y + (1.0 - ratio.y) * 0.5
  );

  float noise = cnoise(vec3(uv, cos(time * 0.1)) * 10.0 + time * 0.5);

  uv.x += noise * displacementX;
  uv.y += noise * displacementY;

  uv = zoom(uv, scale);

  gl_FragColor = vec4(texture2D(image, uv).xyz, alpha);
}

You can also view the fullscreen demo here.

Photos used in examples of the article were taken by Willian Justen and Azamat Zhanisov.

Conclusion

We hope you liked the case study we’ve written together. If you have any questions, feel free to ask us on Twitter or Instagram (@brunoarizio and @lhbzr); we would be very happy to receive your feedback.

Case Study: Portfolio of Bruno Arizio was written by Bruno Arizio and published on Codrops.
