Today I Learned

Visual Studio Code debugging configuration for React

Hi, I want to share my proposed config for debugging JavaScript in VS Code. The first config is for debugging React apps in the browser.
In my opinion, the most important entries are “url” and “skipFiles”: url points to your application’s starting point (by default on port 3000), and skipFiles tells the debugger which locations it should skip.

{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "chrome",
      "request": "launch",
      "name": "React in browser",
      "url": "http://localhost:3000",
      "webRoot": "${workspaceFolder}",
      "skipFiles": [
        "${workspaceFolder}/node_modules/**/*.js",
        "${workspaceFolder}/yourLibToSkip/**/*.js",
        "<node_internals>/**/*.js"
      ]
    }
  ]
}

The second config covers Next.js debugging: server-side, client-side, and full stack.

{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Next.js: debug server-side",
      "type": "node-terminal",
      "request": "launch",
      "command": "npm run dev"
    },
    {
      "name": "Next.js: debug client-side",
      "type": "chrome",
      "request": "launch",
      "url": "http://localhost:3000"
    },
    {
      "name": "Next.js: debug full stack",
      "type": "node-terminal",
      "request": "launch",
      "command": "npm run dev",
      "internalConsoleOptions": "openOnSessionStart",
      "serverReadyAction": {
        "pattern": "started server on .+, url: (https?://.+)",
        "uriFormat": "%s",
        "action": "debugWithChrome"
      }
    }
  ]
}

You can build your own debugger configuration on top of these templates. Have a great time with the debugger in VS Code, and for more info visit the VS Code debugger docs.

How to make a hook run conditionally

The Rules of Hooks clearly state that hooks cannot be called inside conditional code. However, there is a way to achieve the same effect.

Example

const MyComponent = ({isLoading}) => {
  useEffect(() => {
    if (!isLoading) { // <= condition is inside hook
      // some behavior
    }
  }, [isLoading])

  const someCalculationResult = useMemo(() => {
    if (!isLoading) { // <= AGAIN condition is inside hook
      return calculationResult // perform calculation
    }
    return undefined
  }, [isLoading])

  if (isLoading) return <LoadingSpinner />

  return (
    <div>{someCalculationResult}</div>
  )
}

The hooks are begging to be moved after if (isLoading) return <LoadingSpinner />, so there would be no need to add if (!isLoading) conditions inside them.

The way to trigger hooks conditionally, only when isLoading is false, is to wrap them in a component that is itself rendered conditionally.

const ConditionalHookWrapper = ({children}) => {
  // no need to assert !isLoading inside hooks
  useEffect(() => {
    // some behavior
  }, [])

  const someCalculationResult = useMemo(() => {
    return calculationResult // perform calculation
  }, [])
  
  return children({someCalculationResult})
}

const MyComponent = ({isLoading}) => {
  if (isLoading) return <LoadingSpinner />

  // ConditionalHookWrapper component is rendered only when isLoading is false
  return (
    <ConditionalHookWrapper>
      {({ someCalculationResult }) => (
        <div>{someCalculationResult}</div>
      )}
    </ConditionalHookWrapper>
  )
}

This example also shows the additional situation where we need to get some data out of the conditional component. If that were not the case, a simpler version would do:

const ConditionalHookWrapper = () => {
  // no need to assert !isLoading inside hooks
  useEffect(() => {
    // some behavior
  }, [])

  return null
}

const MyComponent = ({isLoading}) => {
  if (isLoading) return <LoadingSpinner />

  return (
    <>
      <ConditionalHookWrapper/>
      <div>Rest of the content</div>
    </>
  )
}

What to do when you commit secret in git

It depends.

If you made a commit just now

Remove the commit using BFG or filter-branch

If you have pushed the commit to the repository

CHANGE THE SECRET!

All secrets that get pushed to a remote repository should be treated as compromised; you cannot be 100% sure they haven’t been pulled by somebody else. github docs

You should still clean up your commits using the methods above to prevent confusion among other devs if they stumble upon the secret in the codebase (even if it’s already changed, they might not know about it).

It’s better to prevent

git-secrets prevents you from committing secrets: https://github.com/awslabs/git-secrets


Github docs

Singleton can't be dumped error

When passing an array of objects to a serializer, I received an error stating: Singleton can't be dumped. To fix the issue, I created a presenter for the singleton object:

  class StateTransitionPresenter
    attr_reader :state_transition

    def initialize(state_transition)
      @state_transition = state_transition
    end

    def as_json(options = {})
      {
        state_transition_time: state_transition.time,
        state: state_transition.state,
        reason: state_transition.reason
      }
    end
  end

and then I used it in the serializer:

order.state_transitions.map { |state_transition| StateTransitionPresenter.new(state_transition) }.as_json

Typeorm - running migrations in separate transactions

IMPORTANT! Be careful when applying this technique with a MySQL database, because some operations there cause an implicit commit, for example CREATE TABLE or ALTER TABLE. This means schema changes won’t be rolled back even if something later in the transaction fails. Read more about implicit commits and transactional DDL in different database engines.
The example below uses a PostgreSQL database, where DDL changes do not cause auto-committing.

Let’s say we have a project with hundreds of migrations, and for some reason you have to rebuild the whole schema from the ground up.

By default, TypeORM runs all migrations in a single transaction, so if something near the end fails, all progress is lost and you have to rerun all of them.

Fortunately, there is a way to instruct TypeORM to run each migration in a separate transaction:

typeorm migration:run -t each

Interestingly, typing typeorm migration:run -t in the terminal won’t give us a list of options, and the documentation doesn’t specify them either (at least I couldn’t find it).

We can inspect the list of available options in node_modules/typeorm/commands/MigrationGenerateCommand.js:

switch (args.t) {
  case "all":
    options.transaction = "all";
    break;
  case "none":
  case "false":
    options.transaction = "none";
    break;
  case "each":
    options.transaction = "each";
    break;
  default:
    // noop
}

Let’s try it out:

First, we generate a migration that adds a Customer table; notice we have a unique index on name:

 export class addCustomer1657889795075 implements MigrationInterface {
  name = 'addCustomer1657889795075';

  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(
      `CREATE TABLE "customer" ("id" SERIAL NOT NULL, "name" character varying NOT NULL, CONSTRAINT "PK_a7a13f4cacb744524e44dfdad32" PRIMARY KEY ("id"))`,
    );
    await queryRunner.query(
      `CREATE UNIQUE INDEX "IDX_ac1455877a69957f7466d5dc78" ON "customer" ("name") `,
    );
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(
      `DROP INDEX "public"."IDX_ac1455877a69957f7466d5dc78"`,
    );
    await queryRunner.query(`DROP TABLE "customer"`);
  }
}

Next, let’s define a migration that fails, say by inserting a duplicate name into the customer table:

 export class addDefaultsToCustomer1657889827769 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`INSERT INTO "customer" ("name") VALUES ('Selleo')`);
    await queryRunner.query(`INSERT INTO "customer" ("name") VALUES ('Selleo')`);
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(`DELETE FROM "customer"`);
  }
}

Let’s run it in normal mode with typeorm migration:run and inspect the output:

Migration "addDefaultsToCustomer1657889827769" failed, error: duplicate key value violates unique constraint "IDX_ac1455877a69957f7466d5dc78"

We were smart enough to know it would fail, and inspecting the database schema in pgAdmin tells us that much: no changes were made.

But if we run it with typeorm migration:run -t each, we can see that the first migration was applied even though the second one failed:

The table was correctly added:

And no data was inserted:

This way we can save ourselves some time when running a lot of migrations.

Avoiding CORS preflight for HTTP requests

Not every HTTP request triggers a CORS preflight.

For simple requests, the preflight may not be sent at all.

Details are listed in the MDN CORS documentation.

In short, the request must be performed:

  • with one of the methods below:

    • GET
    • HEAD
    • POST
  • with only the following additional headers (apart from the standard user-agent ones):

    • Accept
    • Accept-Language
    • Content-Language
    • Content-Type

There are more limitations, so I recommend reading through the MDN docs.
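As a rough sketch of the rules above, one could write a small checker (the function name is mine; this is a simplification, not the full MDN spec):

```javascript
// Rough sketch of the "simple request" rules listed above.
// This is a simplification: the full MDN rules also restrict
// the allowed Content-Type values, Range headers, streaming bodies, etc.
const SIMPLE_METHODS = ["GET", "HEAD", "POST"];
const SIMPLE_HEADERS = ["accept", "accept-language", "content-language", "content-type"];
const SIMPLE_CONTENT_TYPES = [
  "application/x-www-form-urlencoded",
  "multipart/form-data",
  "text/plain",
];

function isSimpleRequest(method, headers = {}) {
  if (!SIMPLE_METHODS.includes(method.toUpperCase())) return false;

  return Object.entries(headers).every(([name, value]) => {
    const lower = name.toLowerCase();
    if (!SIMPLE_HEADERS.includes(lower)) return false;
    // a "simple" Content-Type is limited to three values
    if (lower === "content-type") return SIMPLE_CONTENT_TYPES.includes(value);
    return true;
  });
}

console.log(isSimpleRequest("GET", { Accept: "text/html" })); // true: no preflight expected
console.log(isSimpleRequest("POST", { "Content-Type": "application/json" })); // false: triggers preflight
```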

How not to check files with RuboCop

While using RuboCop on the project, it turned out that a lot of fixes had to be done, not only those connected with my task.

So I came up with the idea to list and fix only the files that I had changed.

Two of them were:

- bin/start-dev
- config/config.yaml

Calling the command directly on those files:

rubocop bin/start-dev
rubocop config/config.yaml

forced RuboCop to check them and report offenses that were not supposed to appear, even though these files are not included as files to be checked in:

rubocop/config/default.yml

So, by default, RuboCop ignores such files; passing them explicitly overrides that.
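To make RuboCop honour its Exclude patterns even for files passed explicitly, the --force-exclusion flag should help:

```shell
# --force-exclusion tells RuboCop to skip files matching Exclude patterns
# from the config even when they are passed on the command line
rubocop --force-exclusion bin/start-dev config/config.yaml
```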

Ordering by SELECT index

Usually, you will order your SQL results by column name, for example:

SELECT id, first_name, last_name
FROM users
ORDER BY last_name

but you can achieve the same result by replacing last_name with its position in the SELECT part of the query:

SELECT id, first_name, last_name
FROM users
ORDER BY 3

NOTE: the index starts at 1, not 0

Persisting custom methods in the Ruby interpreter

To enhance your interpreter, you may add custom methods or functions to your .irbrc or .pryrc files, which can usually be found in your home directory: ~/.irbrc, ~/.pryrc.

Here is how an example ~/.pryrc file may look:

def v_trace
  caller.select { |x| x.include?(Rails.root.to_s) }
end

class Object
  def m?(method_name)
    methods.grep(/#{method_name}/)
  end
end

After saving the above changes, run the pry command and take advantage of the new features.


Extracting substring using regexp

If you want to grab part of a string using a regular expression, there is a high chance you are using the match or scan method:

pry(main)> '$$$ Tony Stark $$$'.scan(/\w+ \w+/)[0] # -> Tony Stark

pry(main)> '$$$ Tony Stark $$$'.match(/(\w+ \w+)/)[1] # -> Tony Stark

Next time, try this:

pry(main)> '$$$ Tony Stark $$$'[/(\w+ \w+)/] # -> Tony Stark

More advanced examples

pry(main)> '$$$ Tony Stark $$$'[/\${3}\s(\w+\s\w+)\s\${3}/, 1] # -> Tony Stark

pry(main)> '$$$ Tony Stark $$$'[/\${3}\s(?<name>\w+\s\w+)\s\${3}/, :name] # -> Tony Stark

As always, visit www.crystular.org to validate your regexp.

Native <select> dropdown behavior

In case anybody is wondering why the native <select> dropdown opens in seemingly random directions (sometimes upwards, sometimes downwards): it opens so that the currently selected option lands right at the cursor level.

It means that when the first option is selected, the dropdown opens downwards; when the last one is selected, upwards; and somewhere in between for the options in the middle. See image:

easter egg here

One liner for recursively transforming to OpenStruct

hash = {
  name: "Bob",
  school: {
    name: "RSpec school",
    level: "secondary",
    region: {
      countryCode: "pl",
      region: "Bielsko"
    }
  }
}

JSON.parse(hash.to_json, object_class: OpenStruct)
=> #<OpenStruct name="Bob", school=#<OpenStruct name="RSpec school", level="secondary", region=#<OpenStruct countryCode="pl", region="Bielsko">>>

and here’s a benchmark, if you’re interested:

require 'benchmark'
require 'json'

def build_nested_hash
  {
    name: 'Bob',
    school: {
      name: 'RSpec school',
      level: 'secondary',
      region: {
        countryCode: 'pl',
        region: 'Bielsko'
      }
    }
  }
end

n = 100_000

Benchmark.bm do |benchmark|
  benchmark.report('OpenStruct') do
    n.times do
      JSON.parse(build_nested_hash.to_json, object_class: OpenStruct)
    end
  end

  benchmark.report('Hash') do
    n.times do
      JSON.parse(build_nested_hash.to_json)
    end
  end
end
 $ ruby benchmark.rb
               user     system      total        real
OpenStruct  4.118626   0.010050   4.128676 (  4.129240)
Hash        0.986823   0.002627   0.989450 (  0.989538)

As you can see, it’s quite a bit slower than parsing to a hash, but still acceptable.

Tested on Ruby 2.6.5 (M1).

Skip GraphQL field based on variable

GraphQL allows you to conditionally include or exclude fields using the @include and @skip directives.

example:

query(
  $limit: Int
  $offset: Int
  $withExpiredBookings: Boolean = false # default value
  # $withoutExpiredBookings: Boolean = true
) {
  users(
    limit: $limit
    offset: $offset
  ) {
    totalCount
    nodes {
      id
      name
      email
      hasBookingsAfterDocumentsExpiration @include(if: $withExpiredBookings)
      # or
      # hasBookingsAfterDocumentsExpiration @skip(if: $withoutExpiredBookings)
      region {
        id
        name
        timezone
        countryCode
      }
    }
  }
}

// hook usage
useQuery(USERS_QUERY, {
  variables: {
    withExpiredBookings: countryCode === 'gb',
  },
})

// HOC usage
export default graphql(USERS_QUERY, {
  options: (ownProps) => ({
    variables: {
      withExpiredBookings: ownProps.countryCode === 'gb',
    },
  }),
})(Component)

See more at: https://www.apollographql.com/docs/apollo-server/schema/directives/

Creating array without parentheses

One of the unique features of Ruby is that you don’t need to use parentheses (most of the time). For example, you can skip them when defining or invoking a method.

What I didn’t know is that you also don’t need brackets when creating a new array:

pry(main)> new_array = 1, 2, 3 # => [1, 2, 3]

More examples:

pry(main)> new_array = 1, second = 2, third = 3 # => new_array = [1, 2, 3]; second = 2; third = 3
pry(main)> new_array = first = 1, second = 2, third = 3 # => new_array = [1, 2, 3]; first = 1; second = 2; third = 3

iTerm triggers

The iTerm terminal has a cool feature called Triggers that can be found in Preferences/Profiles/Advanced/Triggers. It allows you to perform some action when certain text is displayed in your terminal panel.

For example, you can highlight a line that contains the word Error so you won’t miss it in your dev logs.

setup usage

NOTE: Keep in mind that highlighting is only one of many actions that you can take.

React Query with TypeScript

Types for Post resource

// types

export type Post = {
  id: number;
  title: string;
  description: string;
};

export type PostFormData = Omit<Post, 'id'>;

export type PostQueryKey = ['post', { postId: number }];

export type FetchPost = {
  queryKey: PostQueryKey;
};
// api/posts/requests.ts

export const fetchPosts = (): Promise<Post[]> => client.get('/posts');

export const fetchPost = ({ queryKey: [, param] }: FetchPost): Promise<Post> =>
  client.get(`/posts/${param.postId}`);

export const createPost = (data: PostFormData) => client.post('posts', data);

export const editPost = (data: Post) => client.put(`/posts/${data.id}`, data);

export const deletePost = (postId: number) => client.delete(`/posts/${postId}`);
// api/posts/selectors.ts

export const getPosts = (data: any): Post[] => data.data;
export const getPost = (data: any): Post => data.data;
// api/shared.ts

type SelectorsMap = {
  [key: string]: (...arg: any[]) => any;
};

export function handleSelectors<T extends SelectorsMap, K extends keyof T>(
  selectors: T,
  ...additionalParams: any[]
) {
  return function selectorResult(rawData: any) {
    const keys = Object.keys(selectors) as K[];
    const initialData = {} as {
      [Key in K]: ReturnType<T[Key]>;
    };

    return keys.reduce((acc, selectorName) => {
      const selector = selectors[selectorName];

      acc[selectorName] = selector(rawData, additionalParams);
      return acc;
    }, initialData);
  };
}
// api/posts/hooks.ts

export const useGetPosts = ({
  selectors = { posts: getPosts },
  ...options 
} = {}) =>
  useQuery('posts', fetchPosts, {
    select: handleSelectors(selectors),
    ...options,
  });

export const useGetPost = ({
  postId = 0,
  selectors = { post: getPost },
  ...options
} = {}) => {
  const queryKey: PostQueryKey = ['post', { postId }];

  return useQuery(queryKey, fetchPost, {
    select: handleSelectors(selectors),
    ...options,
  });
};

export const useCreatePost = (options = {}) =>
  useMutation(createPost, {
    mutationKey: 'createPost',
    ...options,
  });

export const useEditPost = (options = {}) =>
  useMutation(editPost, {
    mutationKey: 'editPost',
    ...options,
  });

export const useDeletePost = (options = {}) =>
  useMutation(deletePost, {
    mutationKey: 'deletePost',
    ...options,
  });

How to set up HTTPS on CRA with Craco

There are tutorials for setting up HTTPS for CRA like this one, but if you are using craco to override the webpack config, then setting the SSL_CRT_FILE=./.cert/cert.pem SSL_KEY_FILE=./.cert/key.pem env variables does not properly link the cert and key files. To do so, you need to add the config below to craco.config.js:

const fs = require('fs')

module.exports = {
  devServer: {
    https: {
      key: fs.readFileSync('./.cert/key.pem'),
      cert: fs.readFileSync('./.cert/cert.pem'),
    },
  },
}

craco docs

webpack docs

Kudos to TRomik who helped me with that.

Remove/change modifiers of an existing regexp

The following regexp has a couple of modifiers:

const regexp = /my-project-id/gi
// -> /my-project-id/gi

To reuse the existing regexp with the modifiers removed, create a new instance of the regexp with blank flags:

const regexp2 = new RegExp(regexp, '')
// -> /my-project-id/

You can also remove individual flags from the ones specified in the existing regexp:

const regexp3 = new RegExp(regexp, regexp.flags.replace('g', ''))
// -> /my-project-id/i

Faster E2E tests & stable DB setup in NestJS

Link to earlier post on E2E tests in NestJS

The following setup allowed us to cut the duration of the E2E tests by two thirds (from 356s to 111s). The app uses TypeORM.

A single app instance for the whole E2E run.

File: test/utils/create-testing-module.ts

// Single app instance
let app: INestApplication

export async function createTestingModule() {
  const moduleBuilder = Test.createTestingModule({
    imports: [AppModule],
  })

  const module = await moduleBuilder.compile()

  app = module.createNestApplication(undefined, {
    logger: false,
  })

  await app.init()
}

export async function closeTestingModule() {
  await getConnection().dropDatabase()

  if (app) await app.close()
}

export function getTestingModule() {
  if (!app) throw new Error('No app was initialized!')

  return app
}

Functions to drop & clean up the DB:

File: test/utils/clean-up-db.ts

const tableNames = [
  'contact',
  'user',
]

export async function cleanUpDb() {
  const connection = getConnection()

  for (const tableName of tableNames) {
    await connection.query(`DELETE FROM ${tableName};`)
  }
}

export async function dropTables() {
  const connection = await createConnection({
    type: 'mysql',
    username: process.env.TYPEORM_USERNAME,
    password: process.env.TYPEORM_PASSWORD,
    database: process.env.TYPEORM_DATABASE,
  })

  await connection.query('SET FOREIGN_KEY_CHECKS=0;')
  for (const tableName of tableNames) {
    await connection.query(`DROP TABLE IF EXISTS ${tableName};`)
  }

  await connection.close()
}

Hooks to bootstrap the app and clean up the DB between executions:

File: jest.e2e-setup.ts (to be included in the Jest configuration):

beforeAll(async () => {
  await dropTables()
  await createTestingModule()
})

afterAll(async () => {
  await closeTestingModule()
})

beforeEach(async () => {
  await cleanUpDb()
})

SQL migrations in TypeORM before a test suite

The setting is migrationsRun: when enabled in the connection options, TypeORM runs the SQL migrations automatically before the tests start.
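As a sketch, the relevant fragment of the connection options could look like this (the migrations glob is an assumption for illustration):

```typescript
// Sketch: the relevant fragment of the TypeORM connection options
// (the migrations glob is an assumption, adjust it to your build output)
const connectionOptions = {
  // ...type, credentials, entities...
  migrations: ['dist/migrations/*.js'],
  migrationsRun: true, // TypeORM runs pending SQL migrations on connection
}
```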

App instance for test cases

describe('TagResolver (E2E)', () => {
  let app: INestApplication
  let tagModel: TagModel

  beforeEach(async () => {
    app = getTestingModule()

    tagModel = app.get<TagModel>(TagModel)
  })

  it('verifies the app', () => {
    // ...
  })
})

Debug draft state in redux-toolkit (immer.js)

When using console.log or debugger to display the draftState in a redux-toolkit reducer, we normally get an Immer.js proxy object, which contains lots of unnecessary information (Immer.js is used by redux-toolkit by default):

    <ref *1> {
      type_: 0,
      scope_: {
        drafts_: [ [Circular *1], [Object], [Object] ],
        parent_: undefined,
        canAutoFreeze_: true,
        unfinalizedDrafts_: 0
      },
      modified_: true,
      finalized_: false,
      assigned_: {},
      parent_: undefined,
      base_: {
        effects: {
          createEnrollmentBatch: [Object],
          updateEnrollmentBatch: [Object],
          deleteEnrollmentBatch: [Object],
          loadEnrollmentList: [Object]
        },
        enrollmentsByQstreamId: {},
        companyEnrollmentStausByQstreamId: {}
      },
      draft_: [Circular *1],
      revoke_: [Function (anonymous)],
      isManual_: false
      // and many more
    }

To display the actual data, use the current function imported from @reduxjs/toolkit:

import { createSlice, current } from '@reduxjs/toolkit'
...
console.log(current(draftState))

In the log output we’ll see only our data.

    {
      effects: {
        createEnrollmentBatch: { results: [], status: 'NOT_BATCHING', isEnrollMe: null },
        updateEnrollmentBatch: { results: [], suspended: null, status: 'NOT_BATCHING' },
        deleteEnrollmentBatch: { results: [], status: 'NOT_BATCHING' },
        loadEnrollmentList: { status: 'LOAD_SUCCESSFUL', error: false }
      },
      enrollmentsByQstreamId: { '3': {} },
      companyEnrollmentStausByQstreamId: {}
    }

DOCUMENTATION

Aggregating failures in RSpec

Sometimes our examples have multiple independent expectations. In such cases, RSpec’s default behavior of aborting on the first failure may not be ideal.

Consider the following example:

it do
  get('/api/v1/users')

  expect(response.status).to eq(200)
  expect(response.body).to eq('[{"name":"Johny"}]')
end

If our API returns a wrong status, RSpec will print the following output:

1) Users GET /api/v1/users example at ./spec/requests/api/v1/users_spec.rb:9
   Failure/Error: expect(response.status).to eq(200)
   
     expected: 200
          got: 201

While this gives us feedback on the response’s status being wrong, it entirely skips the assertion on the response’s body, even though having both results could make debugging easier.

RSpec has a neat solution to this: Aggregating Failures.
To use it, you can either tag the whole example with :aggregate_failures:

it 'does something', :aggregate_failures do
  ...
end

Or you can just wrap your assertions in an aggregate_failures block:

it do
  get('/api/v1/users')

  aggregate_failures do
    expect(response.status).to eq(200)
    expect(response.body).to eq('[{"name":"Johny"}]')
  end
end

This will change RSpec’s default behavior and will group both expectations:

1) Users GET /api/v1/users example at ./spec/requests/api/v1/users_spec.rb:9
   Got 2 failures from failure aggregation block.

   1.1) Failure/Error: expect(response.status).to eq(200)
        
          expected: 200
               got: 201

   1.2) Failure/Error: expect(response.body).to eq('[{"name":"Johny"}]')
        
          expected: "[{\"name\":\"Johny\"}]"
               got: "[{\"name\":\"Jane\"}]"

More info: documentation

Run cleanup logic conditionally

Running some logic conditionally on component unmount requires the useRef hook.

We cannot do:

useEffect(() => {
  return () => {
    if (status === "explanation") {
      console.log("Trigger logic");
    }
  };
}, []);

Because status coming from the useState hook would not be properly updated due to a stale closure.


We also cannot add status to dependency array

useEffect(() => {
  return () => {
    if (status === "explanation") {
      console.log("Trigger logic");
    }
  };
}, [status]);

Because the effect would be triggered every time the status changed (not only on unmount)


We need useRef to smuggle the status value into useEffect without re-triggering the effect on every status change:

  const statusRef = useRef(status);
  statusRef.current = status;

  useEffect(() => {
    return () => {
      if (statusRef.current === "explanation") {
        console.log("Trigger logic");
      }
    };
  }, []);

CodeSandbox example

React-Select dropdown z-index

Sometimes I have a problem with the z-index of a dropdown, mainly when the dropdown is inside a modal: not all options are visible because the modal is too small.

Instead of trying to fix it with the CSS z-index property, the Select component from react-select provides the menuPortalTarget prop (example value: document.body).

Now dropdowns are fully visible.
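A minimal sketch of the usage (the component name, options prop, and zIndex value are my assumptions; menuPortalTarget and the styles.menuPortal hook come from react-select):

```jsx
import React from 'react'
import Select from 'react-select'

// Render the dropdown menu in a portal attached to <body>,
// so an overflow-hidden modal cannot clip it
const CountrySelect = ({ options }) => (
  <Select
    options={options}
    menuPortalTarget={document.body}
    styles={{ menuPortal: (base) => ({ ...base, zIndex: 9999 }) }}
  />
)

export default CountrySelect
```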

Optional chaining

I used to use lodash to access chained properties, to prevent the app from crashing when one of the properties in the chain was undefined. Bartek Boruta showed me a nice trick, which I knew from other languages but was not sure existed in JS.

Instead of:

const city = get(user, 'address.city')

you can use

const city = user?.address?.city

which will give the same result: the city’s name, or undefined.

Preserve console logs

Debugging code in JavaScript might be tricky sometimes, especially when we want to log events that trigger a page reload. But we can preserve the logs. For example, in Chrome go to the console -> console settings (the gear button) -> Preserve log.

image

Chrome DevTools will then show all logs across reloads and only log an info message when the page changes.

Implementing a moving array pointer index

When you have an array with an index pointing to, e.g., the selected item, and this index should move.

Solutions I often saw take multiple lines and if statements to prevent the index from pointing outside the array.

However, there is a one-liner:

// vector is 1 or -1 depending on user interaction

return (currentIndex + arrayLength + vector) % arrayLength;

To better illustrate this, I have made a code sandbox example.
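The one-liner can also be wrapped in a small helper (the function name is mine) to show the wrap-around behavior:

```javascript
// Move an index by vector (+1 or -1), wrapping around the array bounds.
// Adding arrayLength before the modulo keeps the result non-negative
// when currentIndex + vector would go below zero.
function moveIndex(currentIndex, arrayLength, vector) {
  return (currentIndex + arrayLength + vector) % arrayLength;
}

console.log(moveIndex(0, 3, -1)); // 2: wraps from the first item to the last
console.log(moveIndex(2, 3, 1));  // 0: wraps from the last item to the first
console.log(moveIndex(1, 3, 1));  // 2: a normal step forward
```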

Accessing request in the validate of JWT strategy

The default definition of the JwtStrategy passes the payload parameter to the validate function:

import { ExtractJwt, Strategy } from 'passport-jwt';
import { PassportStrategy } from '@nestjs/passport';
import { Injectable } from '@nestjs/common';
import { jwtConstants } from './constants';

@Injectable()
export class JwtStrategy extends PassportStrategy(Strategy) {
  constructor() {
    super({
      jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(),
      ignoreExpiration: false,
      secretOrKey: jwtConstants.secret,
    });
  }

  async validate(payload: any) {
    return { userId: payload.sub, username: payload.username };
  }
}

There are cases where the validate function should also receive the request object. To enable this, set passReqToCallback to true:

    super({
      jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(),
      ignoreExpiration: false,
      secretOrKey: jwtConstants.secret,
      passReqToCallback: true // <-----
    });

That way, the validate function receives the request first and the JWT payload second:

  async validate(request: Request, payload: any) {
    // do something with the request
    return { userId: payload.sub, username: payload.username };
  }

Pass hex color to fill svg in background url (SCSS)

Passing an SVG into a background url might be useful, e.g. for dropdown arrows in selects.

But there is a problem if you want to dynamically pass a color to fill that SVG: the url does not accept the # from the hex value.

I found that if we use

@use "sass:string";
@use "sass:color";

we can build something similar to encodeURI:

@function escape-url-hex-color($hex_color) {
  $ie-color-string: color.ie-hex-str($hex_color);
  $string: string.quote($ie-color-string);
  $color: string.slice($string, 4);
  @return '%23' + $color;
}

Then we can easily use it to set a color variable and pass it to the fill attribute.
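For example (the selector, arrow shape, and color are my assumptions), the escaped color can be interpolated into an inline SVG:

```scss
@use "sass:string";
@use "sass:color";

@function escape-url-hex-color($hex_color) {
  $ie-color-string: color.ie-hex-str($hex_color);
  $string: string.quote($ie-color-string);
  $color: string.slice($string, 4);
  @return '%23' + $color;
}

.select {
  // assumed color and arrow shape, for illustration only
  $arrow-color: escape-url-hex-color(#1a2b3c);
  background-image: url("data:image/svg+xml,<svg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 10 6'><path d='M0 0l5 6 5-6z' fill='#{$arrow-color}'/></svg>");
}
```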

Creating a type that requires alternative fields

In order to create a type that requires one of several alternative fields to be present, combine an intersection with a union. For instance, the following type requires a person to have either socialSecurityNumber or dateOfBirth present:

type Person = {
  name: string
} & (
  | { socialSecurityNumber: string }
  | { dateOfBirth: string}
)

The first part contains a standard set of fields:

type Person = {
  name: string
} 

that is combined with two alternative types using the intersection (&) and union (|) operators:

& (
  | { socialSecurityNumber: string}
  | { dateOfBirth: string}
)

Then these examples are valid:

const simon: Person = {
  name: 'simon',
  socialSecurityNumber: 'ssn'
}

const peter: Person = {
  name: 'peter',
  dateOfBirth: '01.01.1901'
}

const pete: Person = {
  name: 'peter',
  dateOfBirth: '01.01.1901',
  socialSecurityNumber: 'ssn'
}

But an object containing just name will generate an error:

const invalidPerson: Person = {
  name: 'peter',
}

Type '{ name: string; }' is not assignable to type 'Person'.
  Type '{ name: string; }' is not assignable to type '{ name: string; } & { dateOfBirth: string; }'.
    Property 'dateOfBirth' is missing in type '{ name: string; }' but required in type '{ dateOfBirth: string; }'.

Make sure borders are not doubled in list

When I had a static table (made with divs, because every field was a separate page on mobile), borders were no problem. However, our client’s designer decided it would be nice to indicate required fields with a red border and a pink background. That caused me some trouble due to doubled borders.

Screenshot-2021-09-08-at-10-15-01

My field component looks like this: Screenshot-2021-09-08-at-14-05-38

So I could use the adjacent sibling selector to check whether the field before had an error. If so, I hide the top border, since it’s already present on the required field. Screenshot-2021-09-09-at-08-37-36
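Since the actual styles live only in the screenshots, here is a minimal CSS sketch of the idea (the .field and .field--error class names are my assumptions):

```css
.field {
  border: 1px solid #ccc;
}

.field--error {
  border: 1px solid red;
  background: pink;
}

/* A field that directly follows an errored field hides its top border,
   because the errored field's bottom border is already drawn there */
.field--error + .field {
  border-top: none;
}
```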

Now it works!

Screenshot-2021-09-08-at-10-15-13