How to check if item in list/array (Python and Javascript)

I’m going to try something new this week. I’ve always intended to write down a series of common syntax for Python and Javascript, because I keep looking them up anyway. So I thought this would be a good opportunity to write them out as two separate series: one for the Weekly Post and one for How to X in Python and Javascript. In this post, I will touch on how to find an item in a list.

Situation: You have a list (Python’s preferred term) or array (Javascript’s preferred term) of items. For a simple example, let’s just use a list of numbers [1, 2, 3]. You want to find out if the number 3 is in the list. Hence, you need a boolean result.

Python

3 in [1, 2, 3]  # True

Javascript

const arr = [1, 2, 3];
arr.includes(3);  // true
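
If you need to support older Javascript environments that don’t have includes, indexOf is a common fallback (a quick sketch; indexOf returns -1 when the item is not found):

arr.indexOf(3) !== -1  // true when 3 is in the array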

This is post #5 in my quest for publishing weekly.

Photo by Sophia Baboolal on Unsplash

How to Restore Database Dumps for Postgres in Docker Container

Before 2018, I largely used the following technologies: MySQL for the database, and VirtualBox and Vagrant for running my environments. Starting in mid-2018, I moved towards using Postgres and Docker instead, especially after I paid to learn about cookiecutter-django. Adding Postgres and Docker to my toolbox has increased my effectiveness in my software business. However, this also means I need to figure out regularly used tasks involving these new technologies. Some frequent tasks include backing up and restoring database dumps. Therefore, this article is about how to restore database dumps for Postgres running inside Docker containers. I’ll write the counterpart article about the backup process on a separate day and add the link to it when it’s ready.

Key Insights You Need to Know About Docker Containers

Before I go into the step-by-step process for backing up and restoring database dumps, you first need some fundamental insights into the Postgres and Docker technologies.

  1. Firstly, Docker containers have their own volumes. Think of them like the disk volumes in your host system.
  2. Next, realize that you can execute commands inside the Docker container from your host system. The way to do that is to run docker exec <container_name> <your_command> (see the example after this list).
  3. When you run commands within the container and these commands need to interact with files, the assumption is that those files are found in the Docker container’s own volumes.
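
For example, listing the contents of a volume inside the container from your host might look like this (a small sketch, assuming the container name my_postgres_1 and the /backups volume that appear in the steps below):

docker exec my_postgres_1 ls /backups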

Therefore, when you execute the typical Postgres pg_restore command, the database dump will need to be in the Docker container’s volumes. This is key.

There are several ways to achieve the transfer of files between your host system and the docker container. Similarly, there are several ways to run backup and restore. To keep things simple for beginners, I will only state one way to accomplish this. It doesn’t mean my one way is the best way. But, keeping it to only one way makes it easy for beginners to follow. Moreover, it’s easy for me to update the content here as time goes by.

How to Restore Data Dump Using pg_restore

Step 1: Find the name and id of the Docker container hosting the Postgres instance

Turn on Docker and run the docker ps command to locate the name and id of the Docker container, which leads to the following.

$ docker ps

CONTAINER ID   ...                  NAMES
abc985ddffcf   ...                  my_postgres_1

Step 2: Find the volumes available in the Docker container

Run the command docker inspect -f '{{ json .Mounts }}' <container_id> | python -m json.tool

Then, look at the volume paths under the key Destination.
You should get the following:

$ docker inspect -f '{{ json .Mounts }}' abc985ddffcf | python -m json.tool
[
    {
        "Type": "volume",
        "Name": "my_postgres_backup_local",
        "Source": "/var/lib/docker/volumes/my_postgres_backup_local/_data",
        "Destination": "/backups",
        "Driver": "local",
        "Mode": "rw",
        "RW": true,
        "Propagation": ""
    },
    {
        "Type": "volume",
        "Name": "my_postgres_data_local",
        "Source": "/var/lib/docker/volumes/my_postgres_data_local/_data",
        "Destination": "/var/lib/postgresql/data",
        "Driver": "local",
        "Mode": "rw",
        "RW": true,
        "Propagation": ""
    }
]

In this case, we have /backups and /var/lib/postgresql/data as the volume paths.

Step 3: Copy dump into one of the volumes

Pick a volume and copy your dump in. Run docker cp </path/to/dump/in/host> <container_name>:<path_to_volume>

In my case, I pick the volume /backups, which gives us the following.

$ docker cp my_data.dump my_postgres_1:/backups

Step 4: Get the database owner to run pg_restore command

Execute the pg_restore command via the docker exec command. The generic forms of both commands are the following.

For pg_restore:

pg_restore -U <database_owner> -d <database_name> <path_to_dump>

For docker exec:

docker exec <container_name> <some_command>

Sometimes, you don’t know who the database owner is (this step is optional if you already do). You can find the owner by retrieving the list of databases and their owners with a psql -U postgres -l command, which you also have to run through docker exec. Therefore, we get the following.

docker exec my_postgres_1 psql -U postgres -l

          List of databases
        Name        |  Owner   
--------------------+----------
 some_database      | postgres 

After I have all the information I need, I’m ready to run pg_restore, which becomes the following.

docker exec my_postgres_1 pg_restore -U postgres -d some_database /backups/my_data.dump
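
If you want to confirm the restore worked, one simple sanity check (not required) is to list the tables in the restored database via psql, again through docker exec:

docker exec my_postgres_1 psql -U postgres -d some_database -c '\dt'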

Conclusion

I started by covering some key fundamentals about Docker and Postgres. I then went into the code-level details of the commands to restore your Postgres data dump inside a Docker container.

Now, I end with a summary of the four steps to do so.

How to restore Postgres data dump in a Docker container using pg_restore

  1. Find the name and id of the Docker container hosting the Postgres instance

    Turn on Docker and run docker ps to see the list of containers and their names and ids.

  2. Find the volumes available in the Docker container

    Run docker inspect -f '{{ json .Mounts }}' <container_id> | python -m json.tool

  3. Copy the dump from your host system to one of the volumes

    Run docker cp </path/to/dump/in/host> <container_name>:<path_to_volume>

  4. Execute pg_restore via the docker exec command

    docker exec <container_name> pg_restore -U <database_owner> -d <database_name> <path_to_dump>
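
Putting the four steps together as one minimal sketch, using the example names from this post (my_postgres_1, abc985ddffcf, some_database, and my_data.dump; substitute your own):

docker ps
docker inspect -f '{{ json .Mounts }}' abc985ddffcf | python -m json.tool
docker cp my_data.dump my_postgres_1:/backups
docker exec my_postgres_1 pg_restore -U postgres -d some_database /backups/my_data.dump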

Problems? Errors? Leave a comment below to let me know if it works for you.


This is post #4 in my quest for publishing weekly.

Photo from Youtube

The Year of Consistency

There’s a famous quote by Woody Allen that 80% of success is just showing up. That’s a great quote to help people get started. Once people get started, the formula should change to: 80% of success is showing up consistently. Just as there’s no single catch-all recipe for all people in all contexts, there’s probably no single catch-all recipe for the same person at all stages of their growth. I’m now 37, and life experience tells me there are, in general, 3 or 4 stages of growth regardless of the field. The first stage is simply to get good at starting and restarting. In Woody Allen parlance, get good at showing up.

80 percent of SUCCESS is showing up.

“Showing Up Is 80 Percent of Life – Quote Investigator.” Accessed January 6, 2019. https://quoteinvestigator.com/2013/06/10/showing-up/.

You will start-stop-start-stop-start-stop ad infinitum. Nobody tells you this. But it’s okay to start-stop-start-stop repeatedly at first. It’s also okay to change tactics to help you start, or rather restart more easily each time you stop. It’s confusing as a beginner and reading too many expert tips just adds to the friction of starting and re-starting. Floundering is normal. All you can do is begin again. Every stop is a pause for you to recalibrate and begin again faster, smarter, or better. Nobody builds muscles overnight. Training is merely one series of starting and stopping training sessions until you start to get the hang of it. But once you are past this Start-and-Restart Stage, you acquire a sense of familiarity with the new skill you’re learning. It’s time to go to the next stage. I call that next stage — the Consistency Stage.

2018 was my Start-Restart Year

2018 was a year where I had to start from scratch. I set out to build a new lifestyle with a new direction in my business. Looking back, I now realize 2018 was my personal Start-Restart Stage. I haven’t completed reviewing my 2018 and planning my 2019 goals even though we’re already 1 week into the new year. Am I slow compared to everybody else posting their 2018 review posts and 2019 resolutions? Yes. But I run my own race. I don’t run theirs, and other people don’t run mine.

One thing I did get clarity on in my still-ongoing review and planning: since 2018 was my Start-Restart Stage for a new me, 2019 is surely my Year of Consistency for the new habits and actions I’m adopting.

Previously, I didn’t care too much about how often I was doing the new habits I wanted to form, or even how well I performed them. So long as I kept starting and restarting, that was all fine and dandy. Then results started to change for me. I racked up more steady revenue and started getting fitter and healthier. Towards the end of 2018, I instinctively began to track performance metrics, from the number of hours I work to the calories I eat. Slowly, I seemed to gravitate towards being and acting more consistently.

An example of tracking new metrics: my time tracker shows I spent 45 hours working in week 52 of 2018.

With this in mind, the theme for 2019 would be the Year of Consistency. Meaning to say, simply showing up is the old benchmark; the new benchmark is how often I’m showing up. If this sounds vague to you, I agree. But I don’t really write for you. I write to clarify my own thinking. The good thing is that I can feel both my emotional and rational sides are on board with choosing Consistency as the theme for 2019.

2019 – The Year of Consistency Targets

What does this mean? I’m making changes in a few areas all at once, and I’ll share more details when I get more clarity. Suffice to say that topmost on my mind are my business, health, and blogging goals. For example, I set myself a target to publish a new post every week. Tentatively, I am setting myself targets which I have 100% control over. These targets are subject to change as I gain clarity over them. As of the first Sunday of 2019 (2019-01-06), the targets and the related areas are:

  1. Publish new post every week (blogging)
  2. Work a solid 45 hours and no more on average every week (business)
  3. Maintain less than 1300 calories and a protein intake of 150gm every day (health + nutrition)
  4. Cardio + mini workout or major workout in gym every day (health + fitness)
  5. Execute the Start-and-Restart method for new skills I want to acquire at least once every week (business + mental well-being + knowledge)
  6. Execute the Start-and-Restart method on the Big Promises I want to fulfil at least once every week (mental well-being)

Some of the items mentioned are deliberately obscured because I’m not quite ready to disclose the details. When I’m ready, they will be made known. My blogging method is heavily influenced by my software background. I believe in an attitude of permanent beta, so I expect to keep tweaking continuously over time, even on posts I wrote years before. Some of these targets might change drastically or slightly, depending on the new knowledge I acquire about myself and the greater world at large.

What about you? What’s your story for 2018 and 2019? Let me know.


This is post #3 in my quest for weekly publishing.

Photo by rawpixel on Unsplash

How to Clone Multiple GitHub Repos with Deploy Keys

You have a single user account (deploy-user) on a server instance and you want to deploy multiple GitHub repositories with the same deploy key. You successfully do that for the first repo repo-first. But when you try to do the same for the second repo repo-second, GitHub stops you.

In fact, when you add the same deploy key to the second repo, GitHub gives you the error message Error: Key already in use. In their documentation, they state that

Once a key has been attached to one repository as a deploy key, it cannot be used on another repository.

“Error: Key Already in Use – User Documentation.” Accessed January 2, 2019. https://help.github.com/articles/error-key-already-in-use/#deploy-keys.

Now you can take that key and add it to your user account in GitHub instead. But that would grant read and write access to ALL the repos for deploy-user.

That’s incredibly insecure. So what do you do if you want to deploy multiple repos on the same machine using deploy keys?

Step 1: Create different key pairs for different repos under the same server user account

This step is pretty simple. I prefer to create the different keys like this:

ssh-keygen -t rsa -b 4096 -C "repo-first@servername-deploy-user"
ssh-keygen -t rsa -b 4096 -C "repo-second@servername-deploy-user"
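
Note that ssh-keygen will prompt you for a file name, and the later steps assume the key files are named after the repos. If you prefer, you can pass the file name explicitly with the -f flag (a sketch, assuming the keys live under /home/deploy-user/.ssh/):

ssh-keygen -t rsa -b 4096 -C "repo-first@servername-deploy-user" -f /home/deploy-user/.ssh/repo-first
ssh-keygen -t rsa -b 4096 -C "repo-second@servername-deploy-user" -f /home/deploy-user/.ssh/repo-second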

And then I typically copy out the public keys this way:

cat /home/deploy-user/.ssh/repo-first.pub

From this, I’ll copy and add the keys to the respective repos. Typically, I also disallow write access for these keys. This time, you should successfully add the deploy keys to both repos.

Step 2: Set up the SSH Configuration Per Repo

I’ll edit the SSH configuration this way. As the deploy-user, I run vim ~/.ssh/config. This opens up the configuration file, and I add the configuration like this:

Host alias-repo-first github.com
  Hostname github.com
  IdentityFile /home/deploy-user/.ssh/repo-first

Host alias-repo-second github.com
  Hostname github.com
  IdentityFile /home/deploy-user/.ssh/repo-second

Why do you need this? Because when you run the git clone command, git will automatically pick the default SSH key id_rsa to attempt the connection. Therefore, we need this configuration to get around this automatic selection of the SSH key.

Note: there’s a space between the alias and github.com for each set of configuration under the Host key.

Step 3: Verify the SSH Configuration

To test that this works, exit the configuration file and type the following:

ssh -T git@alias-repo-first

You should see the following if successful:

Hi Organization/repo-first! You've successfully authenticated, but GitHub does not provide shell access.

Repeat the same for each repo’s configuration.

Step 4: Clone the Repo

This is the easiest step. Run the git clone command for each repo.

git clone git@alias-repo-first:Organization/repo-first.git
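
And similarly for the second repo, using its own alias:

git clone git@alias-repo-second:Organization/repo-second.git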

That should solve the situation of cloning multiple GitHub repos using only deploy keys.

Conclusion

Just to summarize, these are the steps.

How to Clone Multiple GitHub Repos with Deploy Keys

  1. Create one SSH key pair per repo

    For example, ssh-keygen -t rsa -b 4096 -C "repo-first@servername-deploy-user"

  2. Set up SSH config file

    Indicate which key pair to use for which repo. For example:
    Host alias-repo-first github.com
    Hostname github.com
    IdentityFile /home/deploy-user/.ssh/repo-first

  3. Verify the configuration

    Test your configuration using ssh -T git@alias-repo

  4. Clone the repo

    Now the moment of truth. git clone git@alias-repo-first:Organization/repo-first.git
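
Or, as one minimal end-to-end sketch for the first repo (using the example names from this post, with the key file named explicitly via -f):

ssh-keygen -t rsa -b 4096 -C "repo-first@servername-deploy-user" -f /home/deploy-user/.ssh/repo-first
cat /home/deploy-user/.ssh/repo-first.pub    # add this as a deploy key on the repo in GitHub
ssh -T git@alias-repo-first                  # after editing ~/.ssh/config as in Step 2
git clone git@alias-repo-first:Organization/repo-first.git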

This is post #2 since I devoted myself to publishing every week.

Photo by Brina Blum on Unsplash

Publish Every Week

Even though I am in the software business, I like to read widely and outside my field. Sometimes, the non-software and non-business related stuff gives me fresh ideas. Other times, it outright inspires me in ways I did not expect. For example, a particular thread by Nick Maggiulli inspired me to pursue publishing a page every week.

What I am going to do is write out the three things I learned from his thread that I’ll be experimenting with in the next few months.

Lesson 1: Just Publish Every Week

Nick is currently at post number 104, which means he has published a post for 104 consecutive weeks. That’s some consistency. Another writer I admire is Tren Griffin from Microsoft. He has now written at least 1 post per week for over 200 consecutive weeks on his blog 25iq. This is something I will be focusing on in this blog, starting with the first week of 2019.

Lesson 2: Your Friends and Family won’t care. That’s fine.

I have always cared a bit too much about what other people think. The approach I will take is to accept that most of the people in my life currently won’t care what I write, because that’s not what they are interested in.

Tiago Forte, another person I follow who works outside my field, has a similar point. That’s why he writes about getting new readers for your blog by simply asking new people you meet to join your email list.

He also expects these people to unsubscribe if it’s not what they want in their life. Those who are interested will stay and that’s how he slowly built up an audience.

But, first things first. I’m not going to expect the people in my life to care what I write in this blog by default.

Lesson 3: Reach out to your heroes

Nick put in a shift and wrote a quality post. However, like the proverbial tree in the forest with no one to hear, his post would be as good as non-existent if he didn’t get it out.

Therefore, he emailed one of his heroes, Jason Zweig, directly. The fact that Jason subsequently tweeted it out is good feedback that Nick wrote something of sufficient quality.

I will go as far as to say that even if Jason hadn’t responded, that would also have been good feedback to work on his writing further.

And this wasn’t the only time Nick reached out to his heroes and benefitted from the reaching out. The advice he received also helped him get over the hump when he got stuck.

Bonus Lesson: Occasional Hits and Long Stretches of Slow Growth in Between

I cannot recommend enough reading the whole thread. In it, you get the impression that Nick probably had long stretches of slow or no growth in between the posts that struck gold. I’m going to end this post with this bonus lesson. All the lessons:

  1. Publish every week
  2. It’s fine when family and friends don’t read
  3. Reach out to your heroes

They work as a stack. Lesson 3 doesn’t work if I don’t appreciate Lesson 2 and get past the fact that most people won’t read my blog. And neither lesson matters if I don’t even publish every week in the first place.

This concludes my first published page for 2019. I have written other stuff before this. Regardless, for purely emotional reasons, I will classify this as my post #1: the first post since I devoted myself to publishing every week.

Photo by Charles Deluvio 🇵🇭🇨🇦 on Unsplash