Customer authentication
We will start off by creating the customer service. This service will be responsible for customer registration, authentication, and payment handling.
The customer service can be created by executing the following command:
Once the service is generated, we can verify that the configuration was generated correctly by deploying the service locally using the CLI:
The CLI will set up a fresh Minikube cluster, install RabbitMQ, Redis, and PostgreSQL, and deploy the newly generated customer service.
Once the CLI job is complete, we can verify the local deployment using the kubectl tool.
The command response should look something like this:
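For reference, a healthy deployment listing typically looks like the following (pod names, hashes, and ages are illustrative, not actual output from this project):

```shell
$ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
customer-6f9c5d8b7d-x2x4q   1/1     Running   0          2m
postgresql-0                1/1     Running   0          5m
rabbitmq-0                  1/1     Running   0          5m
redis-master-0              1/1     Running   0          5m
```

All pods should report a `Running` status before you proceed.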
With the customer service generated and successfully deployed, we can proceed to adding the customer service logic.
All code for customer authentication logic can be seen in the pull request above.
When creating a customer, the plaintext password is saved to the database. Please note that this was done on purpose to keep the demo project simple; in a real application, passwords should always be hashed before being stored.
The authentication library is a util library generated by the util generator. You can see that it is not located under the same path as the service-based libraries, but rather under the lib/api/utils folder. You can generate a util library by executing the following command:
If you look into the apps/api/customer/migrations folder, you will see the Customer migration file there:
This file was generated using the migration generator. Every time an Entity connected to a specific service is added or updated, you can run the migration generator to produce files like this one. You can run the migration generator by executing the following command:
For this command to work, the service needs a working database connection. By default, the generated service's connection points to the internal cluster PostgreSQL endpoint. This connection only works when the service is deployed inside the Minikube cluster, which is why we have to change it in order to run the migrations.
You can do that in two different ways:
Install a PostgreSQL instance locally and change the environment variables to point to it. Please keep in mind that you will have to revert the environment variable changes after the migrations are generated, in order for the database connection to work inside the local cluster.
If you have already run the microservice stack local deployment tool and have a Minikube cluster set up, you can use the internal PostgreSQL instance to run the migration generator. To do that, change the database connection host value inside the .env file to localhost and expose the PostgreSQL instance via the kubectl port-forward postgresql-0 5432:5432 command. Same as above, after generating the migration, revert the database connection host value so that the service can start inside the local cluster.
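The second approach can be sketched as follows. Note that the environment variable name `DB_HOST` and the .env path are assumptions for illustration; use whatever key and location your generated service actually reads the database host from:

```shell
# 1. Point the service's database host at localhost
#    (variable name and file path are hypothetical; check your generated .env)
sed -i 's/^DB_HOST=.*/DB_HOST=localhost/' apps/api/customer/.env

# 2. Expose the in-cluster PostgreSQL instance on localhost:5432
#    (leave this running while the migration generator executes)
kubectl port-forward postgresql-0 5432:5432

# 3. In another terminal, run the migration generator, then revert
#    the host value before redeploying into the cluster
```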
With all of the business logic added to the service, it is almost ready to be deployed. Before deploying, we should update the ingress values located inside the infrastructure/ingress-values.yaml file by adding the endpoint-to-service mapping there.
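As an illustration, such a mapping might look like the fragment below. The exact keys depend on the ingress chart's values schema, so the names here are assumptions rather than the project's actual format:

```yaml
# infrastructure/ingress-values.yaml (illustrative structure only)
paths:
  - path: /v1/customer
    serviceName: customer
    servicePort: 3000
```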
Once the ingress config is set, the service is ready for deployment. We can update the local deployment using the CLI command:
Once the command is executed, we can validate the deployment using the kubectl get pods command; the service should be up and running. You can view the service logs by using the kubectl logs <pod-name> command.
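The validation steps look like this:

```shell
# List pods and confirm the customer service reports a Running status
kubectl get pods

# Stream the logs of the customer service pod
# (replace <pod-name> with the actual name from the listing above)
kubectl logs --follow <pod-name>
```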
Now that the service is deployed, we can expose the ingress controller on a local address and test the customer service.
To expose the ingress controller, we first have to find out the ingress controller pod name by executing the following command:
The response should look something like this
We can now take the ingress controller pod name and insert it into the port-forward command, the same way we did with PostgreSQL earlier, to expose the controller on a local address.
This command will expose the service on port 3000. We can now test the authentication endpoints.
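Assuming an NGINX ingress controller (the pod name below is illustrative), the two steps look roughly like this:

```shell
# Find the ingress controller pod name
kubectl get pods --all-namespaces | grep ingress

# Forward local port 3000 to the controller's HTTP port
# (replace the pod name with the one returned by the previous command)
kubectl port-forward ingress-nginx-controller-abc123 3000:80
```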
Let's start by creating a new customer account:
If successful, the response should include an access token. Let's copy the access token and use it to call the v1/customer/me endpoint to validate that the authentication guards are working.
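For example, using curl — the registration endpoint path and request body fields here are assumptions based on common conventions, so adjust them to match the service's actual API:

```shell
# Register a new customer
# (endpoint path and body fields are hypothetical)
curl -X POST http://localhost:3000/v1/customer/register \
  -H 'Content-Type: application/json' \
  -d '{"email": "test@example.com", "password": "password123"}'

# Call the guarded endpoint with the access token from the response
curl http://localhost:3000/v1/customer/me \
  -H 'Authorization: Bearer <access-token>'
```

A request without the Authorization header should be rejected, confirming the guard is in place.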
With authentication working, we can proceed to adding the item ordering functionality!