Cheers 🍻 to those who own a Surface Pro X and try to do some development work on it.
By now, most of them are probably regretting not buying a normal non-ARM Surface Pro, or just an M1. Although WSL2 is very nice, it is not really mature enough yet. It has some nasty bugs, such as slow networking and a slow filesystem, which is a big pain when the development code runs in WSL while being accessed from a browser or IDE on Windows.
I love JavaScript; however, very often it doesn't love me back.
```javascript
const dataBuffers = new Array(4);
dataBuffers.fill([]);
for (let i = 0; i < 4; i += 1) {
  dataBuffers[i].push(7);
}
console.log(JSON.stringify(dataBuffers)); // guess what the answer is
```

If you guessed the answer would be `[[7],[7],[7],[7]]`, then think again.
The problem is that dataBuffers.fill([]) creates a single array and fills every slot of dataBuffers with a reference to that one array, so each push lands in the same place and the actual output is [[7,7,7,7],[7,7,7,7],[7,7,7,7],[7,7,7,7]]. It is actually pretty clear if you have done JavaScript (or any language) long enough, but one might be stuck with the impression that each slot of dataBuffers holds its own empty array.
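One way to get a fresh array per slot is a small sketch like this: replace fill([]) with Array.from, whose mapping callback runs once per index and therefore returns a new array each time.

```javascript
// Array.from calls the mapper once per index, so every
// slot gets its own independent buffer instead of a
// shared reference.
const dataBuffers = Array.from({ length: 4 }, () => []);
for (let i = 0; i < 4; i += 1) {
  dataBuffers[i].push(7);
}
console.log(JSON.stringify(dataBuffers)); // [[7],[7],[7],[7]]
```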
If you are using java.util.Currency as part of your entity, most likely you are already regretting it by now. Most of the time, a currency is just a three-letter code, and we don't need the JDK's logic bundled in there. Keeping things simple and mapping these currencies as a String or your own custom class is the way to go.
For people who did use java.util.Currency, the pain arrives when a country changes its currency or a new one is added, because the class can only resolve the codes that its JDK's built-in table knows about.
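A quick sketch of where this bites (the unknown code below is made up for illustration): Currency.getInstance throws for any code that is not in the JDK's built-in ISO 4217 table, so a stored code that is newer than your JDK cannot even be materialized, while a plain String has no such coupling.

```java
import java.util.Currency;

public class CurrencyPitfall {
    public static void main(String[] args) {
        // A well-known code resolves fine.
        Currency eur = Currency.getInstance("EUR");
        System.out.println(eur.getCurrencyCode());

        // A code the JDK's table does not know about throws,
        // even if it were a perfectly valid new currency.
        try {
            Currency.getInstance("ZZZ"); // "ZZZ" is a made-up example code
            System.out.println("resolved");
        } catch (IllegalArgumentException e) {
            System.out.println("unknown to this JDK: ZZZ");
        }
    }
}
```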
Around last week, I was listening to Rev. Stephen Tong's masterclass (basically like a seminar). He said something that struck me deeply. It went something along these lines:
Woe to you who wants to become a leader because you want people to serve you! Woe to you if you want to be a leader because you think you will have many people helping you!
A leader is the one who serves all his people!
Why do we need it?

In the early stages of development, event-driven architecture is not needed, since everything is still small and can be contained in one place. Unfortunately, as the system grows bigger and bigger, putting everything in one place (a.k.a. a monolith) is not a good way to scale. The logic of the whole system also keeps increasing in complexity, and eventually the need arises to split the giant system into domains.
Exporting Spring Boot metrics to CloudWatch may not be that straightforward. Since I have done it for my project, I will show how it can be done easily with the help of a couple of libraries.
First, Spring Boot uses its own metrics mechanism out of the box. The first step is to swap it for Dropwizard's metrics library, which is more powerful and has more features. Thankfully, judging from the documentation, the actuator has complete support for the library.
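As a sketch, assuming a Maven build: in the Spring Boot 1.x era the actuator picked up Dropwizard metrics automatically once metrics-core was on the classpath, so the swap starts with a dependency like this (version shown only as an example — match it to your Boot version).

```xml
<!-- pom.xml: Dropwizard metrics on the classpath; the actuator
     then publishes its metrics through a MetricRegistry -->
<dependency>
    <groupId>io.dropwizard.metrics</groupId>
    <artifactId>metrics-core</artifactId>
    <version>3.1.2</version>
</dependency>
```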
Anyone who has used Docker and cares about quality has had to deal with the logs problem. As we all know, the application runs inside a container, and if we don't handle the logs in a special way, they are lost when the container is recreated.
To deal with this, we usually map the application's log folder to the container's host, so that the logs are not lost when the container is restarted or replaced.
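The mapping can be declared as a bind mount; as a sketch in docker-compose (the service name, image, and paths here are made-up examples):

```yaml
# docker-compose.yml -- example paths, adjust to your app
services:
  myapp:
    image: myorg/myapp:latest   # hypothetical image
    volumes:
      # host path : container path -- logs written inside the
      # container land on the host and survive recreation
      - /var/log/myapp:/app/logs
```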
So after a few dist-upgrades, Docker is not running anymore on my Debian Jessie. A quick run of sudo service docker status shows me:
```
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; disabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2015-09-02 00:03:18 CEST; 8min ago
     Docs: https://docs.docker.com
  Process: 22770 ExecStart=/usr/bin/docker -d -H fd:// $DOCKER_OPTS (code=exited, status=1/FAILURE)
 Main PID: 22770 (code=exited, status=1/FAILURE)

Sep 02 00:03:18 rowanto-mdeb systemd[1]: Started Docker Application Container Engine.
```
I use Debian Jessie, and I am the type who runs this command frequently:

```
sudo apt-get update && sudo apt-get dist-upgrade
```

Now I am suffering the consequences. The desktop is completely broken. I googled around, and there was nothing, so at first I thought something in my laptop was broken.
Well, if you have been a Linux user long enough, this kind of stuff is kind of normal. It happens once in a blue moon.
So, for those out there trying to set up HDFS and struggling with this kind of error, where the datanode is denied communication with the namenode:
```
2015-05-07 08:04:19,694 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1184863888-172.17.0.3-1430984962919 (Datanode Uuid null) service to dockernamenode.zanox.com/172.17.0.3:8020 beginning handshake with NN
2015-05-07 08:04:19,697 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool BP-1184863888-172.17.0.3-1430984962919 (Datanode Uuid null) service to dockernamenode.zanox.com/172.17.0.3:8020 Datanode denied communication with namenode because hostname cannot be resolved (ip=172.17.0.5, hostname=172.17.0.5): DatanodeRegistration(0.
```
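The error means the namenode tried a reverse lookup of the datanode's IP and rejected the registration when it failed. One common fix, assuming you cannot give the datanodes resolvable hostnames (e.g. via DNS or /etc/hosts), is to disable that registration check on the namenode:

```xml
<!-- hdfs-site.xml on the namenode: skip the reverse-DNS
     check during datanode registration -->
<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
</property>
```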