!!! MOVED here https://gitlab.com/baptiste-dauphin/doc !!!
- System
- Network
- Security
- Software
- Databases
- Hardware
- Virtualization
- Kubernetes
- Provider
- CentOS
- ArchLinux
- Miscellaneous
- Definitions
- Media / Platform
- DevOps
useradd -m -s /bin/bash b.dauphin
-m create home dir
-s shell path
echo 'root:toto' | chpasswd
or be prompted to change your current user's password
passwd
switch to a user (default root)
su -
su - b.dauphin
In order to edit the sudoers file, use the proper tool: visudo. Because even for root, the file is read-only.
visudo -f /var/tmp/sudoers.new
visudo -f /etc/sudoers
visudo -c
/etc/sudoers: parsed OK
/etc/sudoers.d/dev: parsed OK
visudo -f /etc/sudoers.d/qwbind-dev -c
/etc/sudoers.d/qwbind-dev: parsed OK
Add user baptiste to the sudo group
usermod -aG sudo baptiste
usermod -aG wireshark b.dauphin
apt update
apt-cache search sendmail
apt-cache search --names-only 'icedtea?'
apt depends sendmail
apt-get clean
htop
nload
free -g
To sort by memory usage we can use either %MEM or RSS columns.
RSS : Resident Set Size, the total memory usage in kilobytes.
%MEM : shows the same information as a percentage of the total memory available.
ps aux --sort=+rss
ps aux --sort=%mem
Empty swap
swapoff -a && swapon -a
How to read memory usage in htop?
htop
- Hide user threads: shift + H
- Hide kernel threads: shift + K
- Close the process tree view: F5
- then you can sort out the process of your interest by PID and read the RES column
- sort by MEM% by pressing shift + M (or F3 to search in the cmd line)
grep MemTotal /proc/meminfo | awk '{print $2}'
grep MemTotal /proc/meminfo | awk '{print $2}' | xargs -I {} echo "scale=4; {}/1024^1" | bc
grep MemTotal /proc/meminfo | awk '{print $2}' | xargs -I {} echo "scale=4; {}/1024^2" | bc
available to the current process (may be less than all online)
nproc
all online
nproc --all
old fashion version
grep -c ^processor /proc/cpuinfo
Default system software (Debian)
update-alternatives - maintain symbolic links determining default commands
List existing selections, or list the alternatives of the one you want to see
update-alternatives --get-selections
update-alternatives --list x-www-browser
Modify existing selection interactively
sudo update-alternatives --config x-terminal-emulator
Create a new selection
update-alternatives --install /usr/bin/x-window-manager x-window-manager /usr/bin/i3 20
Changing the default terminal or browser will prompt you with an interactive console to choose among recognized software
sudo update-alternatives --config x-terminal-emulator
sudo update-alternatives --config x-www-browser
- Graphic server (often X11, Xorg, or just X, it's the same software)
- Display Manager (SDDM, lightDM, gnome)
- Windows Manager (i3-wm, gnome)
Simple Desktop Display Manager is a display manager for the X11 and Wayland windowing systems. SDDM was written from scratch in C++11 and supports theming via QML.
service sddm status
service sddm restart : restart sddm (to load new monitor)
update-alternatives --install /usr/bin/x-window-manager x-window-manager /usr/bin/i3 20
https://i3wm.org/docs/userguide.html#_automatically_starting_applications_on_i3_startup
The > operator redirects the output, usually to a file, but it can be to a device. You can also use >> to append.
If you don't specify a number then the standard output stream is assumed, but you can also redirect errors:
- > file redirects stdout to file
- 1> file redirects stdout to file
- 2> file redirects stderr to file
- &> file redirects stdout and stderr to file
/dev/null is the null device it takes any input you want and throws it away. It can be used to suppress any output.
Is there a difference between > /dev/null 2>&1 and &> /dev/null ?
&> is new in Bash 4, the former is just the traditional way, I am just so used to it (easy to remember).
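A minimal sketch (the command itself is only an illustration): both lines below silence a command entirely, the first in any POSIX shell, the second in Bash 4+.
# POSIX way: send stdout to /dev/null, then duplicate stderr onto stdout
ls /nonexistent > /dev/null 2>&1
# Bash 4+ shorthand, same effect
ls /nonexistent &> /dev/null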
remove some characters ( and ) if found
.. | tr -d '()'
tar --help
Command | meaning |
---|---|
-c | create (name your file .tar) |
-(c)z | archive type gzip (name your file .tar.gz) |
-(c)j | archive type bzip2 |
-x | extract |
-f | file |
-v | verbose |
-C | Set dir name to extract files |
--directory | same |
Extract to STDOUT, keeping the original file unchanged
-c : write on standard output, keep original files unchanged
gunzip -c file.gz > file
Start at the end of a file
+ will run an initial command when the file is opened; G jumps to the end
less +G app.log
Stream editor
Cmd | meaning |
---|---|
sed -n | silent mode. By default prints nothing. Use with /p to print only the interesting lines |
sed -i | acts not on the input stream but in place on the specified file |
sed -f script_file | Take instruction from script |
Example
Replace pattern 1 by pattern 2
sed -i 's/pattern 1/pattern 2/g' /etc/ssh/sshd_config
Replace "Not After" by nothing (delete it) in the input stream
... | sed -n 's/ *Not After : *//p'
cmd | meaning |
---|---|
sed '342d' -i ~/.ssh/known_hosts | remove 342th line of file |
sed '342,342d' -i ~/.ssh/known_hosts | remove 342th to 342th line, equivalent to precedent cmd |
sed -i '1,42d' test.sql | remove the first 42 lines of test.sql |
common usage
find . -maxdepth 1 -type l -ls
find /opt -type f -mmin -5 -exec ls -ltr {} +
find /var/log/nginx -type f -name "*access*" -mmin +5 -exec ls -ltr {} +
find . -type f -mmin -5 -print0 | xargs -0 /bin/ls -ltr
cmd | meaning |
---|---|
find -mtime n | last DATA MODIFICATION time (day) |
find -atime n | last ACCESS time (day) |
find -ctime n | last STATUS MODIFICATION time (day) |
"Modify" is the timestamp of the last time the file's content has been mofified. This is often called "mtime".
"Change" is the timestamp of the last time the file's inode has been changed, like by changing permissions, ownership, file name, number of hard links. It's often called "ctime".
list in the current directory all files last modified more than 10 days ago (+10), in chronological order
list in the current directory all files last modified less than 10 days ago (-10), in chronological order
find . -type f -mtime +10 -exec ls -ltr {} +
find . -type f -mtime -10 -exec ls -ltr {} +
list files with last modified date of LESS than 5 minutes
find . -type f -mmin -5 -exec ls -ltr {} +
xargs reads items from the standard input, delimited by blanks (which can be protected with double or single quotes or a backslash) or newlines, and executes the command (default is /bin/echo) one or more times with any initial-arguments followed by items read from standard input. Blank lines on the standard input are ignored.
You can define the name of the received arg (from stdin). In the following example the chosen name is %.
The following example takes all the .log files and mv them into a directory named 'working_sheet_of_the_day'.
ls *.log | xargs -I % mv % ./working_sheet_of_the_day
compress
tar czvf myfiles.tar.gz /dir1 /dir2 /dir3
extract in a given directory
tar zxvf somefilename.tar.gz or .tgz
tar jxvf somefilename.tar.bz2
tar xf file.tar -C /path/to/directory
Command | meaning |
---|---|
file | get meta info about that file |
tail -n 15 -f | print the last 15 lines of the file and keep following new entries |
head -n 15 | print the first 15 lines of a file |
who | info about connected users |
w | same with more info |
wall | print on all TTY (for all connected user) |
sudo updatedb | update the local database of the files present in the filesystem |
locate file_name | Search into this databases |
echo app.$(date +%Y_%m_%d) | print a string based on subshell return |
touch app.$(date +%Y_%m_%d) | create empty file named on string based on subshell return |
mkdir app.$(date +%Y_%m_%d) | create directory named on string based on subshell return |
sh | run a 'sh' shell, very old shell |
bash | run a 'bash' shell, classic shell of debian 7,8,9 |
zsh | run a 'zsh' shell, new shell |
for i in google.com free.fr wikipedia.de ; do dig $i +short ; done | resolve several domains in a loop |
Operator | Description |
---|---|
! EXPRESSION | The EXPRESSION is false. |
-n STRING | The length of STRING is greater than zero. |
-z STRING | The length of STRING is zero (ie it is empty). |
STRING1 = STRING2 | STRING1 is equal to STRING2 |
STRING1 != STRING2 | STRING1 is not equal to STRING2 |
INTEGER1 -eq INTEGER2 | INTEGER1 is numerically equal to INTEGER2 |
INTEGER1 -gt INTEGER2 | INTEGER1 is numerically greater than INTEGER2 |
INTEGER1 -lt INTEGER2 | INTEGER1 is numerically less than INTEGER2 |
-d FILE | FILE exists and is a directory. |
-e FILE | FILE exists. |
-f FILE | True if file exists AND is a regular file. |
-r FILE | FILE exists and the read permission is granted. |
-s FILE | FILE exists and its size is greater than zero (ie. it is not empty). |
-w FILE | FILE exists and the write permission is granted. |
-x FILE | FILE exists and the execute permission is granted. |
-eq 0 | COMMAND result equal to 0 |
$? | last exit code |
$# | Number of parameters |
$@ | expands to all the parameters |
if [ -f /tmp/test.txt ];
then
echo "true";
else
echo "false";
fi
$ true && echo howdy!
howdy!
$ false || echo howdy!
howdy!
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
DIR="$(dirname "$0")"
for i in `seq 1 6`
do
mysql -h 127.0.0.1 -u user -p'password' -e "show variables like 'server_id'; select user()"
done
The backtick `...` is the legacy syntax, required only by the very oldest of non-POSIX-compatible Bourne shells. There are several reasons to always prefer the $(...) syntax:
$ echo "`echo \\a`" "$(echo \\a)"
a \a
$ echo "`echo \\\\a`" "$(echo \\\\a)"
\a \\a
# Note that this is true for *single quotes* too!
$ foo=`echo '\\'`; bar=$(echo '\\'); echo "foo is $foo, bar is $bar"
foo is \, bar is \\
echo "x is $(sed ... <<<"$y")"
In this example, the quotes around $y have to be escaped when using backticks:
echo "x is `sed ... <<<\"$y\"`"
x=$(grep "$(dirname "$path")" file)
x=`grep "\`dirname \"$path\"\`" file`
Be very careful about the context of their definition
set variable to current shell
export http_proxy=http://10.10.10.10:9999
echo $http_proxy
should print the value
set variables only for the current line execution
http_proxy=http://10.10.10.10:9999 wget -O - https://repo.saltstack.com/apt/debian/9/amd64/latest/SALTSTACK-GPG-KEY.pub
echo $http_proxy
will return nothing because it doesn't exist anymore
Export multiple env var
export {http,https,ftp}_proxy="http://10.10.10.10:9999"
Useful common usage
export http_proxy=http://10.10.10.10:9999/
export https_proxy=$http_proxy
export ftp_proxy=$http_proxy
export rsync_proxy=$http_proxy
export no_proxy="localhost,127.0.0.1,localaddress,.localdomain.com"
Remove variable
unset http_proxy
unset http_proxy
unset https_proxy
unset HTTP_PROXY
unset HTTPS_PROXY
debian style
ps -ef
ps -o pid,user,%mem,command ax
Get parent pid of a given pid
ps -o ppid= -p 750
ps -o ppid= -p $(pidof systemd)
RedHat style
ps aux
kill default TERM
kill -l list all signals
kill -l 15 get name of signal
kill -s TERM PID
kill -TERM PID
kill -15 PID
shortcut | meaning |
---|---|
ctrl + \ | SIGQUIT |
ctrl + C | SIGINT |
Number | Name (short name) | Description | Used for |
---|---|---|---|
0 | SIGNULL (NULL) | Null | Check access to pid |
1 | SIGHUP (HUP) | Hangup | Terminate; can be trapped |
2 | SIGINT (INT) | Interrupt | Terminate; can be trapped |
3 | SIGQUIT (QUIT) | Quit | Terminate with core dump; can be trapped |
9 | SIGKILL (KILL) | Kill | Forced termination; cannot be trapped |
15 | SIGTERM (TERM) | Terminate | Terminate; can be trapped. This is the default if no signal is provided to the kill command. |
24 | SIGSTOP (STOP) | Stop | Pause the process; cannot be trapped |
25 | SIGTSTP (STP) | Stop/pause the process | can be trapped |
26 | SIGCONT (CONT) | Continue | Run a stopped process |
xeyes &
jobs -l
kill -s STOP 3405
jobs -l
kill -s CONT 3405
jobs -l
kill -s TERM 3405
list every running process
ps -ef | grep ssh-agent | awk '{print $2}'
ps -ef | grep ssh-agent | awk '$0=$2'
Print only the process IDs of syslogd:
ps -C syslogd -o pid=
Print only the name of PID 42:
ps -q 42 -o comm=
To see every process running as root (real & effective ID) in user format:
ps -U root -u root u
Get PID (process Identifier) of a running process
pidof iceweasel
pgrep ssh-agent
diff <(cat /etc/passwd) <(cut -f2 /etc/passwd)
<(...) is called process substitution. It converts the output of a command into a file-like object that diff can read from. While process substitution is not POSIX, it is supported by bash, ksh, and zsh.
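Another small example of the same idea, comparing the sorted output of two commands without temporary files (the file names are placeholders):
diff <(sort file1.txt) <(sort file2.txt)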
User's IPC shared memory, semaphores, and message queues
Type of IPC object. Possible values are:
q -- message queue
m -- shared memory
s -- semaphore
USERNAME=$1
TYPE=$2
ipcs -$TYPE | grep $USERNAME | awk ' { print $2 } ' | xargs -I {} ipcrm -$TYPE {}
ipcs -s | grep zabbix | awk ' { print $2 } ' | xargs -I {} ipcrm -s {}
Unix File types
Description | symbol |
---|---|
Regular file | - |
Directory | d |
Special files | (5 sub types in it) |
block file | b |
Character device file | c |
Named pipe file or just a pipe file | p |
Symbolic link file | l |
Socket file | s |
df -h
du -sh --exclude=relative/path/to/uploads --exclude other/path/to/exclude
du -hsx --exclude=/{proc,sys,dev} /*
lsblk
list physical disks and then mount them on your filesystem
lsblk
fdisk -l
sudo mount /dev/sdb1 /mnt/usb
awk '$4~/(^|,)ro($|,)/' /proc/mounts
umount /mnt
If you do so, you will get the "umount: /mnt: device is busy." error as shown below.
umount /mnt
umount: /mnt: device is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))
Use fuser command to find out which process is accessing the device along with the user name.
fuser -mu /mnt/
/mnt/: 2677c(sathiya)
- fuser : command used to identify processes using the files / directories
- -m : specify the directory or block device along with this, which will list all the processes using it
- -u : shows the owner of the process
You got two choice here.
- Ask the owner of the process to properly terminate it or
- You can kill the process with super user privileges and unmount the device.
When you cannot wait to properly umount a busy device, use umount -f as shown below.
umount -f /mnt
If it still doesn't work, lazy unmount should do the trick. Use umount -l as shown below.
umount -l /mnt
When you have lost remote access to the machine:
Reboot the system
press e
to edit grub
After editing grub, add this at the end of linux line
init=/bin/bash
grub config extract
menuentry 'Debian GNU/Linux, with Linux 4.9.0-8-amd64' {
load_video
insmod gzio
if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
insmod part_gpt
insmod ext2
...
...
...
echo 'Loading Linux 4.9.0-8-amd64 ...'
linux /vmlinuz-4.9.0-8-amd64 root=/dev/mapper/debian--baptiste--vg-root ro quiet
echo 'Loading initial ramdisk ...'
initrd /initrd.img-4.9.0-8-amd64
}
Change this line
linux /vmlinuz-4.9.0-8-amd64 root=/dev/mapper/debian--baptiste--vg-root ro quiet
into this
linux /vmlinuz-4.9.0-8-amd64 root=/dev/mapper/debian--baptiste--vg-root rw quiet init=/bin/bash
F10 to boot with the current config
Make writable the root filesystem (useless if you switched 'ro' into 'rw')
mount -n -o remount,rw /
Make your modifications
passwd user_you_want_to_modify
# or
vim /etc/iptables/rules.v4
to exit the prompt and reboot the computer.
exec /sbin/init
fsck.ext4 /dev/mapper/vg_data-lv_data
e2fsck 1.43.4 (31-Jan-2017)
/dev/mapper/VgData-LvData contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
ln -sfTv /opt/app_$TAG /opt/app_current
List open file, filter by deleted
Very useful when there is an inconsistency between the result of df -h
and du -sh /*
It may happen that you remove a file, but another process's file descriptor is still using it. So, seen from the filesystem, the space is not released/freed.
lsof -nP | grep '(deleted)'
Old System control replaced by Systemd since debian 8
aka SysV, aka the old-fashioned way, preferred by some people due to the full control provided by a directly modifiable bash script located under /etc/init.d/
usage
service rsyslog status
change process management
vim /etc/init.d/rsyslog
Introduced since debian 8
Based on internal and templated management. The only way to interact with systemd is by modifying instructions (but not code directly) in service files.
They can be located under different directories.
Where are Systemd Unit Files Found?
The files that define how systemd will handle a unit can be found in many different locations, each of which have different priorities and implications.
The system's copy of unit files is generally kept in the /lib/systemd/system directory. When software installs unit files on the system, this is the location where they are placed by default.
Unit files stored here are able to be started and stopped on-demand during a session. This will be the generic, vanilla unit file, often written by the upstream project's maintainers, that should work on any system that deploys systemd in its standard implementation. You should not edit files in this directory. Instead you should override the file, if necessary, using another unit file location which will supersede the file in this location.
If you wish to modify the way that a unit functions, the best location to do so is within the /etc/systemd/system directory. Unit files found in this directory location take precedence over any of the other locations on the filesystem. If you need to modify the system's copy of a unit file, putting a replacement in this directory is the safest and most flexible way to do this.
If you wish to override only specific directives from the system's unit file, you can actually provide unit file snippets within a subdirectory. These will append or modify the directives of the system's copy, allowing you to specify only the options you want to change.
The correct way to do this is to create a directory named after the unit file with .d appended on the end. So for a unit called example.service, a subdirectory called example.service.d could be created. Within this directory a file ending with .conf can be used to override or extend the attributes of the system's unit file.
There is also a location for run-time unit definitions at /run/systemd/system. Unit files found in this directory have a priority landing between those in /etc/systemd/system and /lib/systemd/system. Files in this location are given less weight than the former location, but more weight than the latter.
The systemd process itself uses this location for dynamically created unit files created at runtime. This directory can be used to change the system's unit behavior for the duration of the session. All changes made in this directory will be lost when the server is rebooted.
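As a sketch of the drop-in mechanism described above (example.service and the proxy value are hypothetical):
# create a drop-in directory and override one directive only
mkdir -p /etc/systemd/system/example.service.d
cat > /etc/systemd/system/example.service.d/override.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://10.10.10.10:9999"
EOF
systemctl daemon-reload
systemctl restart example.service
Note that systemctl edit example.service does the directory creation, the editing and the daemon-reload for you.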
Summary
Location | precedence (1 = highest) | Meaning |
---|---|---|
/etc/systemd/system | 1 | SysAdmin maintained |
/run/systemd/system | 2 | Run-time only, lost after reboot |
/lib/systemd/system | 3 | Package vendor maintained (apt, rpm, pacman, ...) |
# show all installed unit files
systemctl list-unit-files --type=service
# loaded
systemctl list-units --type=service --state=loaded
# active
systemctl list-units --type=service --state=active
# running
systemctl list-units --type=service --state=running
# show a specific property (service var value)
systemctl show --property=Environment docker
# print all content
systemctl show docker --no-pager | grep proxy
syslog-ng is a syslog implementation which can take log messages from sources and forward them to destinations, based on powerful filter directives.
Note: With systemd's journal
(journalctl), syslog-ng is not needed
by most users.
If you wish to use both the journald and syslog-ng files, ensure the following settings are in effect. For systemd-journald, in the /etc/systemd/journald.conf
file, Storage=
either set to auto or unset (which defaults to auto) and ForwardToSyslog=
set to no or unset (defaults to no). For /etc/syslog-ng/syslog-ng.conf
, you need the following source
stanza:
source src {
# syslog-ng
internal();
# systemd-journald
system();
};
A very good overview is the official doc; the ArchLinux wiki is still a very good tutorial.
Starting with syslog-ng version 3.6.1 the default system()
source on Linux systems using systemd uses journald
as its standard system()
source.
Typically:
- systemd-journald
  - stores messages from the units that it manages, e.g. sshd.service
  - unit.{service,slice,socket,scope,path,timer,mount,device,swap}
- syslog-ng
  - reads INPUT messages from systemd-journald
  - writes OUTPUT to various files under /var/log/*
Examples from default config:
log { source(s_src); filter(f_auth); destination(d_auth); };
log { source(s_src); filter(f_cron); destination(d_cron); };
log { source(s_src); filter(f_daemon); destination(d_daemon); };
log { source(s_src); filter(f_kern); destination(d_kern); };
journalctl is a command for viewing logs collected by systemd. The systemd-journald service is responsible for systemd's log collection, and it retrieves messages from the kernel, systemd services, and other sources.
These logs are gathered in a central location, which makes them easy to review. The log records in the journal are structured and indexed, and as a result journalctl is able to present your log information in a variety of useful formats.
journalctl
journalctl -r
Each line starts with the date (in the server's local time), followed by the server's hostname, the process name, and the message for the log
journalctl --priority=0..3 --since "12 hours ago"
-u, --unit=UNIT
--user-unit=UNIT
--no-pager
--list-boots
-b, --boot[=ID]
-e, --pager-end
-f, --follow
-p, --priority=RANGE
0: emerg
1: alert
2: crit
3: err
4: warning
5: notice
6: info
7: debug
Key command | Action |
---|---|
down arrow key, enter, e, or j | Move down one line. |
up arrow key, y, or k | Move up one line. |
space bar | Move down one page. |
b | Move up one page. |
right arrow key | Scroll horizontally to the right. |
left arrow key | Scroll horizontally to the left. |
g | Go to the first line. |
G | Go to the last line. |
10g | Go to the 10th line. Enter a different number to go to other lines. |
50p or 50% | Go to the line half-way through the output. Enter a different number to go to other percentage positions. |
/search term | Search forward from the current position for the search term string. |
?search term | Search backward from the current position for the search term string. |
n | When searching, go to the next occurrence. |
N | When searching, go to the previous occurrence. |
m | Set a mark, which saves your current position. Press m followed by a single character to label the mark with that character. |
' | Return to a mark: press ' followed by the mark's single character label. Note that ' is the single-quote. |
q | Quit less |
journalctl --no-pager
It's not recommended that you do this without first filtering down the number of logs shown.
journalctl --since "2018-08-30 14:10:10"
journalctl --until "2018-09-02 12:05:50"
journalctl --list-boots
journalctl -b -2
journalctl -b
journalctl -u ssh
journalctl -k
Format Name | Description |
---|---|
short | The default option, displays logs in the traditional syslog format. |
verbose | Displays all information in the log record structure. |
json | Displays logs in JSON format, with one log per line. |
json-pretty | Displays logs in JSON format across multiple lines for better readability. |
cat | Displays only the message from each log without any other metadata. |
journalctl -o json-pretty
systemd-journald can be configured to persist your systemd logs on disk, and it also provides controls to manage the total size of your archived logs. These settings are defined in /etc/systemd/journald.conf. To start persisting your logs, uncomment the Storage line in /etc/systemd/journald.conf and set its value to persistent. Your archived logs will be held in /var/log/journal. If this directory does not already exist in your file system, systemd-journald will create it.
systemctl restart systemd-journald
The following settings in journald.conf control how large your logs' size can grow to when persisted on disk:
Setting | Description |
---|---|
SystemMaxUse | The total maximum disk space that can be used for your logs. |
SystemKeepFree | The minimum amount of disk space that should be kept free for uses outside of systemd-journald's logging functions. |
SystemMaxFileSize | The maximum size of an individual journal file. |
SystemMaxFiles | The maximum number of journal files that can be kept on disk. |
systemd-journald will respect both SystemMaxUse and SystemKeepFree, and it will set your journals' disk usage to meet whichever setting results in a smaller size.
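A minimal sketch of what this could look like in /etc/systemd/journald.conf (the size values are only illustrative):
[Journal]
Storage=persistent
SystemMaxUse=500M
SystemKeepFree=1G
SystemMaxFileSize=100M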
journalctl -u systemd-journald
journalctl --disk-usage
journalctl --verify
journalctl offers functions for immediately removing archived journals on disk. Run journalctl with the --vacuum-size option to remove archived journal files until the total size of your journals is less than the specified amount. For example, the following command will reduce the size of your journals to 2GiB:
journalctl --vacuum-size=2G
Run journalctl with the --vacuum-time option to remove archived journal files with dates older than the specified relative time. For example, the following command will remove journals older than one year:
journalctl --vacuum-time=1years
#### Logger
To write into the journal
logger -n syslog.baptiste-dauphin.com --rfc3164 --tcp -P 514 -t 'php95.8-fpm' -p local7.error 'php-fpm error test'
logger -n syslog.baptiste-dauphin.com --rfc3164 --udp -P 514 -t 'sshd' -p local7.info 'sshd error : test '
logger -n syslog.baptiste-dauphin.com --rfc3164 --udp -P 514 -t 'sshd' -p auth.info 'sshd error : test'
for ((i=0; i < 10; ++i)); do logger -n syslog.baptiste-dauphin.com --rfc3164 --tcp -P 514 -t 'php95.8-fpm' -p local7.error 'php-fpm error test' ; done
salt -C 'G@app:api and G@env:production and G@client:mattrunks' \
cmd.run "for ((i=0; i < 10; ++i)); do logger -n syslog.baptiste-dauphin.com --rfc3164 --tcp -P 514 -t 'php95.8-fpm' -p local7.error 'php-fpm error test' ; done" \
shell=/bin/bash
logger '@cim: {"name1":"value1", "name2":"value2"}'
Some good explanations: the ArchLinux iptables wiki.
iptables-save
iptables-save > /etc/iptables/rules.v4
iptables -L
iptables -nvL
iptables -nvL INPUT
iptables -nvL OUTPUT
iptables -nvL PREROUTING
The Default linux iptables chain policy is ACCEPT for all INPUT, FORWARD and OUTPUT policies. You can easily change this default policy to DROP with below listed commands.
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP
iptables --policy INPUT DROP
iptables -P chain target [options]
--policy  -P chain target : set policy on chain to target
--append  -A chain : append to chain
--check   -C chain : check for the existence of a rule
--delete  -D chain : delete matching rule from chain
iptables --list : print rules in human readable format
iptables --list-rules : print rules in iptables readable format
iptables -v -L -n
iptables -A OUTPUT -d 10.10.10.10/32 -p tcp -m state --state NEW -m tcp --match multiport --dports 4506:10000 -j ACCEPT
iptables -t raw -I PREROUTING -j NOTRACK
iptables -t raw -I OUTPUT -j NOTRACK
iptables -A INPUT -j LOG --log-prefix "INPUT:DROP:" --log-level 6
iptables -A INPUT -j DROP
iptables -P INPUT DROP
iptables -A OUTPUT -j LOG --log-prefix "OUTPUT:DROP:" --log-level 6
iptables -A OUTPUT -j DROP
iptables -P OUTPUT DROP
You have to temporarily REMOVE the final LOG and DROP lines, otherwise your new rule will be appended after them and never matched:
iptables -D INPUT -j LOG --log-prefix "INPUT:DROP:" --log-level 6
iptables -D INPUT -j DROP
iptables -A INPUT -p udp -m udp --sport 123 -j ACCEPT
iptables -A INPUT -j LOG --log-prefix "INPUT:DROP:" --log-level 6
iptables -A INPUT -j DROP
debian 8 and under, get info about connection tracking. Current and max
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max
debian 9, with a wrapper, easier to use !
conntrack -L [table] [options] [-z]
conntrack -G [table] parameters
conntrack -D [table] parameters
conntrack -I [table] parameters
conntrack -U [table] parameters
conntrack -E [table] [options]
conntrack -F [table]
conntrack -C [table]
conntrack -S
for those binaries : ifconfig, netstat, rarp, route, ip, dig
apt install net-tools iproute2 dnsutils
Command | meaning |
---|---|
ip a | get IP of the system |
ip r | get routes of the system |
ip route change default via 99.99.99.99 dev ens8 proto dhcp metric 100 | modify default route |
ip addr add 88.88.88.88/32 dev ens4 | add (failover) IP to a NIC |
new ubuntu network manager
cat /{lib,etc,run}/netplan/*.yaml
(old way)
command | specification |
---|---|
netstat -t | list tcp connections |
netstat -lt | list listening tcp socket |
netstat -lu | list listening udp socket |
netstat -ltu | list listening udp + tcp socket |
netstat -lx | list listening unix socket |
netstat -ltup | same as above, with info on process |
netstat -ltupn | p(PID), l(LISTEN), t(tcp), n(Convert names) |
netstat -ltpa | all = ESTABLISHED (default) LISTEN |
netstat -lapute | classic useful usage |
netstat -salope | same |
netstat -tupac | same |
(new quicker way)
command | specification |
---|---|
ss -tulipe | more info on listening process |
ss -tlpn | print listening tcp sockets with process |
ss -ltpn sport eq 2377
ss -t '( sport = :ssh )'
ss -ltn sport gt 500
ss -ltn sport le 500
Real time, just see what's going on, by looking at all interfaces.
tcpdump -i any -w capturefile.pcap
tcpdump port 80 -w capture_file
tcpdump 'tcp[32:4] = 0x47455420'
tcpdump -n dst host ip
tcpdump -vv -i any port 514
tcpdump -i any -XXXvvv src net 10.0.0.0/8 and dst port 1234 or dst port 4321 | ccze -A
tcpdump -i any port not ssh and port not domain and port not zabbix-agent | ccze -A
https://danielmiessler.com/study/tcpdump/
tcpdump -i lo udp port 123 -vv -X
tcpdump -vv -x -X -s 1500 -i any 'port 25' | ccze -A
https://danielmiessler.com/study/tcpdump/#source-destination
tcpflow -c port 443
lsof -Pan -p $PID -i
# ss version
ss -l -p -n | grep ",1234,"
debian 9 new network management style
vim /etc/systemd/network/50-default.network
systemctl status systemd-networkd
systemctl restart systemd-networkd
old fashioned network management style
vlan tagging and route add
auto enp61s0f1.3200
iface enp61s0f1.3200 inet static
address 10.10.10.20/22
vlan-raw-device enp61s0f1
post-up ip route add 10.0.0.0/8 via 10.10.10.254
# with package "ifupdown"
auto eth0
iface eth0 inet static
address 192.0.2.7/30
gateway 192.0.2.254
Activate NAT (Network Address Translation)
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
With OpenVPN
Run OpenVpn client in background, immune to hangups, with output to a non-tty
cd /home/baptiste/.openvpn && \
nohup sudo openvpn /home/baptiste/.openvpn/[email protected]
Netcat (network catch) TCP/IP swiss army knife
nc -l 127.0.0.1 -p 80
nc -lvup 514
# listen all ip on tcp port 443
nc -lvtp 443
only for TCP (obviously), since UDP is not a connection-oriented protocol
nc -znv 10.10.10.10 3306
echo '<187>Apr 29 15:26:16 qwarch plop[12458]: baptiste' | nc -u 10.10.10.10 1514
Display your public AND private keys from the gpg-agent keyring
gpg --list-keys
gpg --list-secret-keys
How to generate gpg public/private key pair
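A minimal interactive sketch with GnuPG 2.x (it will prompt for the key type, size, expiry, name/email and a passphrase):
gpg --full-generate-key
# verify the new key pair is in the keyring
gpg --list-secret-keys --keyid-format long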
vault login -method=ldap username=$USER
Will set up a token under ~/.vault-token
By default ssh reads stdin. When ssh is run in the background or in a script we need to redirect /dev/null into stdin.
Here is what we can do.
ssh shadows.cs.hut.fi "uname -a" < /dev/null
ssh -n shadows.cs.hut.fi "uname -a"
Will generate an output file containing 1 IP / line
for minion in minion1 minion2 database_dev random_id debian minion3 \
; do ipam $minion | tail -n 1 | awk '{print $1}' \
>> minions.list \
; done
Run a parallelized ssh connection test (each background job just runs exit after connecting)
while read minion_ip; do
(ssh -n $minion_ip exit \
&& echo Success \
|| echo CONNECTION_ERROR) &
done <minions.list
Test sshd config before reloading (avoid fail on restart/reload and cutting our own hand)
sshd = ssh daemon
sshd -t
Test connection to multiple servers
for outscale_instance in 10.10.10.1 10.10.10.2 10.10.10.3 10.10.10.4 \
; do ssh $outscale_instance -q exit \
&& echo "$outscale_instance :" connection succeed \
|| echo "$outscale_instance :" connection failed \
; done
10.10.10.1 : connection succeed
10.10.10.2 : connection succeed
10.10.10.3 : connection failed
10.10.10.4 : connection succeed
quickly copy your ssh public key to a remote server
cat ~/.ssh/id_ed25519.pub | ssh [email protected] "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys"
-a : archive mode
-u : update mode, not full copy
rsync -au --progress -e "ssh -i path/to/private_key" [email protected]:~/remote_path /output/path
Keyword | meaning |
---|---|
SSL | |
TLS | |
Private key | |
Public key | |
RSA | |
ECDSA |
openssl s_client -connect www.qwant.com:443 -servername www.qwant.com < /dev/null | openssl x509 -text
openssl s_client -connect qwant.com:443 -servername qwant.com < /dev/null | openssl x509 -noout -fingerprint
openssl s_client -connect qwantjunior.fr:443 -servername qwantjunior.fr < /dev/null | openssl x509 -text -noout -dates
Useful use case
openssl x509 --text --noout --in ./dev.bdauphin.io.pem -subject -issuer
(.pem)
openssl x509 --text --noout --in /etc/ssl/private/sub.domain.tld.pem
# debian 7, openssl style
openssl x509 -text -in /etc/ssl/private/sub.domain.tld.pem
OpenSSL verify with -CAfile
openssl verify ./dev.bdauphin.io.pem
CN = dev.bdauphin.io.pem
error 20 at 0 depth lookup: unable to get local issuer certificate
error ./dev.bdauphin.io: verification failed
openssl verify -CAfile ./bdauphin.io_intermediate_certificate.pem ./dev.bdauphin.io.pem
./dev.bdauphin.io: OK
Test certificate validation + right addresses
for certif in * ; do openssl verify -CAfile ../baptiste-dauphin.io_intermediate_certificate.pem $certif ; done
dev.baptiste-dauphin.io.pem: OK
plive.baptiste-dauphin.io.pem: OK
www.baptiste-dauphin.io.pem: OK
for certif in * ; do openssl x509 -in $certif -noout -text | egrep '(Subject|DNS):' ; done
Subject: CN = dev.baptiste-dauphin.com
DNS:dev.baptiste-dauphin.com, DNS:dav-dev.baptiste-dauphin.com, DNS:provisionning-dev.baptiste-dauphin.com, DNS:share-dev.baptiste-dauphin.com
Subject: CN = plive.baptiste-dauphin.com
DNS:plive.baptiste-dauphin.com, DNS:dav-plive.baptiste-dauphin.com, DNS:provisionning-plive.baptiste-dauphin.com, DNS:share-plive.baptiste-dauphin.com
Subject: CN = www.baptiste-dauphin.com
DNS:www.baptiste-dauphin.com, DNS:dav.baptiste-dauphin.com, DNS:provisionning.baptiste-dauphin.com, DNS:share.baptiste-dauphin.com
args | comments |
---|---|
-host host | use -connect instead |
-port port | use -connect instead |
-connect host:port | who to connect to (default is localhost:4433) |
-verify_hostname host | check peer certificate matches "host" |
-verify_email email | check peer certificate matches "email" |
-verify_ip ipaddr | check peer certificate matches "ipaddr" |
-verify arg | turn on peer certificate verification |
-verify_return_error | return verification errors |
-cert arg | certificate file to use, PEM format assumed |
-certform arg | certificate format (PEM or DER) PEM default |
-key arg | Private key file to use, in cert file if not specified but cert file is. |
-keyform arg | key format (PEM or DER) PEM default |
-pass arg | private key file pass phrase source |
-CApath arg | PEM format directory of CA's |
-CAfile arg | PEM format file of CA's |
-trusted_first | Use trusted CA's first when building the trust chain |
-no_alt_chains | only ever use the first certificate chain found |
-reconnect | Drop and re-make the connection with the same Session-ID |
-pause | sleep(1) after each read(2) and write(2) system call |
-prexit | print session information even on connection failure |
-showcerts | show all certificates in the chain |
-debug | extra output |
-msg | Show protocol messages |
-nbio_test | more ssl protocol testing |
-state | print the 'ssl' states |
-nbio | Run with non-blocking IO |
-crlf | convert LF from terminal into CRLF |
-quiet | no s_client output |
-ign_eof | ignore input eof (default when -quiet) |
-no_ign_eof | don't ignore input eof |
-psk_identity arg | PSK identity |
-psk arg | PSK in hex (without 0x) |
-ssl3 | just use SSLv3 |
-tls1_2 | just use TLSv1.2 |
-tls1_1 | just use TLSv1.1 |
-tls1 | just use TLSv1 |
-dtls1 | just use DTLSv1 |
-fallback_scsv | send TLS_FALLBACK_SCSV |
-mtu | set the link layer MTU |
-no_tls1_2/-no_tls1_1/-no_tls1/-no_ssl3/-no_ssl2 | turn off that protocol |
-bugs | Switch on all SSL implementation bug workarounds |
-cipher | preferred cipher to use, use the 'openssl ciphers' command to see what is available |
-starttls prot | use the STARTTLS command before starting TLS for those protocols that support it, where 'prot' defines which one to assume. Currently, only "smtp", "pop3", "imap", "ftp", "xmpp", "xmpp-server", "irc", "postgres", "lmtp", "nntp", "sieve" and "ldap" are supported. |
-xmpphost host | Host to use with "-starttls xmpp[-server]" |
-name host | Hostname to use for "-starttls lmtp" or "-starttls smtp" |
-krb5svc arg | Kerberos service name |
-engine id | Initialise and use the specified engine -rand file:file:... |
-sess_out arg | file to write SSL session to |
-sess_in arg | file to read SSL session from |
-servername host | Set TLS extension servername in ClientHello |
-tlsextdebug | hex dump of all TLS extensions received |
-status | request certificate status from server |
-no_ticket | disable use of RFC4507bis session tickets |
-serverinfo types | send empty ClientHello extensions (comma-separated numbers) |
-curves arg | Elliptic curves to advertise (colon-separated list) |
-sigalgs arg | Signature algorithms to support (colon-separated list) |
-client_sigalgs arg | Signature algorithms to support for client certificate authentication (colon-separated list) |
-nextprotoneg arg | enable NPN extension, considering named protocols supported (comma-separated list) |
-alpn arg | enable ALPN extension, considering named protocols supported (comma-separated list) |
-legacy_renegotiation | enable use of legacy renegotiation (dangerous) |
-use_srtp profiles | Offer SRTP key management with a colon-separated profile list |
-keymatexport label | Export keying material using label |
-keymatexportlen len | Export len bytes of keying material (default 20) |
ls -l /usr/local/share/ca-certificates
ls -l /etc/ssl/certs/
sudo update-ca-certificates
Will generate both the private key and the CSR
openssl req -nodes -newkey rsa:4096 -sha256 -keyout $(SUB.MYDOMAIN.TLD).key -out $(SUB.MYDOMAIN.TLD).csr -subj "/C=FR/ST=France/L=PARIS/O=My Company/CN=$(SUB.MYDOMAIN.TLD)"
# generate private key
openssl ecparam -out $(SUB.MYDOMAIN.TLD).key -name sect571r1 -genkey
# generate csr
openssl req -new -sha256 -key $(SUB.MYDOMAIN.TLD).key -nodes -out $(SUB.MYDOMAIN.TLD).csr -subj "/C=FR/ST=France/L=PARIS/O=My Company/CN=$(SUB.MYDOMAIN.TLD)"
You can verify the content of your csr token here : DigiCert Tool
print jails
fail2ban-client status
get banned ip and other info about a specific jail
fail2ban-client status ssh
setting banip triggers an email send
fail2ban-client set ssh banip 10.10.10.10
unbanip
fail2ban-client set ssh unbanip 10.10.10.10
check a specific fail2ban chain
iptables -nvL f2b-sshd
fail2ban-client get dbpurgeage
fail2ban-client get dbfile
fail2ban will send mail using the MTA (mail transfer agent)
grep "mta =" /etc/fail2ban/jail.conf
mta = sendmail
global default config
- /etc/fail2ban/jail.conf
will be overridden by the parameters of the centralized control file. This is where we enable jails:
- /etc/fail2ban/jail.local
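For example, a minimal /etc/fail2ban/jail.local sketch enabling the ssh jail (values are illustrative; on older Debian the jail is named ssh, as in the commands above, on newer releases it is sshd):
[DEFAULT]
bantime = 600
findtime = 600
maxretry = 5
[sshd]
enabled = true
port = ssh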
stands for Network Time Protocol
Debian, Ubuntu, Fedora, CentOS, and most operating system vendors, don't package NTP into client and server packages separately. When you install NTP, you've made your computer both a server, and a client simultaneously.
As a client, rather than pointing your servers to static IP addresses, you may want to consider using the NTP pool project. Various people all over the world have donated their stratum 1 and stratum 2 servers to the pool, Microsoft, XMission, and even myself have offered their servers to the project. As such, clients can point their NTP configuration to the pool, which will round robin and load balance which server you will be connecting to.
There are a number of different domains that you can use for the round robin. For example, if you live in the United States, you could use:
- 0.us.pool.ntp.org
- 1.us.pool.ntp.org
- 2.us.pool.ntp.org
- 3.us.pool.ntp.org
There are round robin domains for each continent, minus Antarctica, and for many countries in each of those continents. There are also round robin servers for projects, such as Ubuntu and Debian:
- 0.debian.pool.ntp.org
- 1.debian.pool.ntp.org
- 2.debian.pool.ntp.org
- 3.debian.pool.ntp.org
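For example, the client side of /etc/ntp.conf can simply point at those round robin names (a sketch for the classic ntpd package):
pool 0.debian.pool.ntp.org iburst
pool 1.debian.pool.ntp.org iburst
pool 2.debian.pool.ntp.org iburst
pool 3.debian.pool.ntp.org iburst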
On my public NTP stratum 2 server, I run the following command to see its status:
ntpq -pn
remote refid st t when poll reach delay offset jitter
------------------------------------------------------------------------------
*198.60.22.240 .GPS. 1 u 912 1024 377 0.488 -0.016 0.098
+199.104.120.73 .GPS. 1 u 88 1024 377 0.966 0.014 1.379
-155.98.64.225 .GPS. 1 u 74 1024 377 2.782 0.296 0.158
-137.190.2.4 .GPS. 1 u 1020 1024 377 5.248 0.194 0.371
-131.188.3.221 .DCFp. 1 u 952 1024 377 147.806 -3.160 0.198
-217.34.142.19 .LFa. 1 u 885 1024 377 161.499 -8.044 5.839
-184.22.153.11 .WWVB. 1 u 167 1024 377 65.175 -8.151 0.131
+216.218.192.202 .CDMA. 1 u 66 1024 377 39.293 0.003 0.121
-64.147.116.229 .ACTS. 1 u 62 1024 377 16.606 4.206 0.216
We need to understand each of the columns, so we understand what this is saying:
Column | Meaning |
---|---|
remote | The remote server you wish to synchronize your clock with |
refid | The upstream stratum to the remote server. For stratum 1 servers, this will be the stratum 0 source. |
st | The stratum level, 0 through 16. |
t | The type of connection. Can be "u" for unicast or manycast, "b" for broadcast or multicast, "l" for local reference clock, "s" for symmetric peer, "A" for a manycast server, "B" for a broadcast server, or "M" for a multicast server |
when | The last time when the server was queried for the time. Default is seconds, or "m" will be displayed for minutes, "h" for hours and "d" for days. |
poll | How often the server is queried for the time, with a minimum of 16 seconds to a maximum of 36 hours. It's also displayed as a value from a power of two. Typically, it's between 64 seconds and 1024 seconds. |
reach | This is an 8-bit left shift octal value that shows the success and failure rate of communicating with the remote server. Success means the bit is set, failure means the bit is not set. 377 is the highest value. |
delay | This value is displayed in milliseconds, and shows the round trip time (RTT) of your computer communicating with the remote server. |
offset | This value is displayed in milliseconds, using root mean squares, and shows how far off your clock is from the reported time the server gave you. It can be positive or negative. |
jitter | This number is an absolute value in milliseconds, showing the root mean squared deviation of your offsets. |
Next to the remote server, you'll notice a single character. This character is referred to as the "tally code", and indicates whether or not NTP is or will be using that remote server in order to synchronize your clock. Here are the possible values:
remote single character | Meaning |
---|---|
whitespace | Discarded as not valid. Could be that you cannot communicate with the remote machine (it's not online), this time source is a ".LOCL." refid time source, it's a high stratum server, or the remote server is using this computer as an NTP server. |
x | Discarded by the intersection algorithm. |
. | Discarded by table overflow (not used). |
- | Discarded by the cluster algorithm. |
+ | Included in the combine algorithm. This is a good candidate if the current server we are synchronizing with is discarded for any reason. |
# | Good remote server to be used as an alternative backup. This is only shown if you have more than 10 remote servers. |
* | The current system peer. The computer is using this remote server as its time source to synchronize the clock |
o | Pulse per second (PPS) peer. This is generally used with GPS time sources, although any time source delivering a PPS will do. This tally code and the previous tally code "*" will not be displayed simultaneously. |
apt-get install ntp
ntpq -p
vim /etc/ntp.conf
sudo service ntp restart
ntpq -p
ntpstat
unsynchronised
time server re-starting
polling server every 64 s
ntpstat
synchronised to NTP server (10.10.10.10) at stratum 4
time correct to within 323 ms
polling server every 64 s
ntpq -c peers
remote refid st t when poll reach delay offset jitter
======================================================================
hamilton-nat.nu .INIT. 16 u - 64 0 0.000 0.000 0.001
ns2.telecom.lt .INIT. 16 u - 64 0 0.000 0.000 0.001
fidji.daupheus. .INIT. 16 u - 64 0 0.000 0.000 0.001
#### Drift
git remote -v
git branch -v
git remote set-url origin [email protected]:GROUP/SUB_GROUP/project_name
create tag at your current commit
git tag temp_tag_2
By default tags are not pushed, nor pulled
git push origin tag_1 tag_2
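Likewise, to fetch or push tags explicitly (a sketch):
# fetch all tags from the remote
git fetch --tags
# push all local tags at once
git push origin --tags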
list tag
git tag -l
delete tag
git tag -d temp_tag_2
Get the current tag
git describe --tags --exact-match HEAD
git checkout dev
git checkout master
git checkout branch
git checkout v_0.9
git checkout ac92da0124997377a3ee30f3159cdee838bd5b0b
Get the current branch name
git branch | grep \* | cut -d ' ' -f2
Specific file
git diff -- human/lvm.md
Global diff between your unstaged changes (workspace) and the index
git diff
Global diff between your staged changes (index) and the local repository
git diff --staged
To list the stashed modifications
git stash list
To show files changed in the last stash
git stash show
So, to view the content of the most recent stash, run
git stash show -p
To view the content of an arbitrary stash, run something like
git stash show -p stash@{1}
In case of conflict when pulling, by default git will keep both versions of the file(s)
git pull origin master
git status
You have unmerged paths.
(fix conflicts and run "git commit")
(use "git merge --abort" to abort the merge)
...
...
Unmerged paths:
(use "git add <file>..." to mark resolution)
both modified: path/to/file
git checkout --theirs /path/to/file
git checkout --ours /path/to/file
With git reset and git stash, go back to the previous commit. This will uncommit your last changes.
git reset --soft HEAD^
temporarily hide your work in the stash and verify its content
git stash
git stash show
Try to FF merge without any conflict
git pull
put back your work by spawning back your modifications
git stash pop
And then commit again
git commit "copy-paste history commit message :)"
You can tell git that you want your modifications to take precedence. So, in that case of merge conflict, cancel the conflict by aborting the current merge,
git merge --abort
Then, pull again telling git to keep YOUR local changes
git pull -X ours origin master
Or if you want to keep only the REMOTE work
git pull -X theirs origin master
git log --author="b.dauphin" \
--since="2 week ago"
git log --author="b.dauphin" \
-3
git log --since="2 week ago" \
--pretty=format:"%an"
git log --author="b.dauphin" \
--since="2 week ago" \
--pretty=format:"%h - %an, %ar : %s"
git show 01624bc338d4a89c09ba2915ff25ce08174b8e93 3d9228fa99eab6c208590df91eb2af05daad8b40
git log --follow -p -- file
git --no-pager log --follow -p -- file
The git revert command can be considered an 'undo' type command, however, it is not a traditional undo operation. INSTEAD OF REMOVING the commit from the project history, it figures out how to invert the changes introduced by the commit and appends a new commit with the resulting INVERSE CONTENT. This prevents Git from losing history, which is important for the integrity of your revision history and for reliable collaboration.
Reverting should be used when you want to apply the inverse of a commit from your project history. This can be useful, for example, if youโre tracking down a bug and find that it was introduced by a single commit. Instead of manually going in, fixing it, and committing a new snapshot, you can use git revert to automatically do all of this for you.
git revert <commit hash>
git revert c6c94d459b4e1ed81d523d53ef81b6a4744eac12
find a specific commit
git log --pretty=format:"%h - %an, %ar : %s"
The git reset command is a complex and versatile tool for undoing changes. It has three primary forms of invocation. These forms correspond to the command line arguments --soft, --mixed, --hard. The three arguments each correspond to Git's three internal state management mechanisms: the Commit Tree (HEAD), the Staging Index, and the Working Directory.
Git reset & three trees of Git
To properly understand git reset usage, we must first understand Git's internal state management systems. Sometimes these mechanisms are called Git's "three trees".
git commit ...
git reset --soft HEAD^
edit
git commit -a -c ORIG_HEAD
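A sketch of the three forms side by side (HEAD~1 is just an example target):
git reset --soft HEAD~1   # move HEAD only; index and working directory untouched
git reset --mixed HEAD~1  # default: also unstage changes (reset the index)
git reset --hard HEAD~1   # also discard working directory changes (destructive)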
First time, clone a repo including its submodules
git clone --recurse-submodules -j8 [email protected]:FataPlex/documentation.git
Update an existing local repositories after adding submodules or updating them
git pull --recurse-submodules
git submodule update --init --recursive
- Delete the relevant section from the .gitmodules file
- Stage the .gitmodules changes git add .gitmodules
- Delete the relevant section from .git/config
- Run git rm --cached path_to_submodule (no trailing slash)
- Run rm -rf .git/modules/path_to_submodule (no trailing slash).
- Commit git commit -m "Removed submodule"
- Delete the now untracked submodule files rm -rf path_to_submodule
From the shell, before entering a tmux session
tmux ls
tmux new
tmux new -s session
tmux attach
tmux attach -t session_name
tmux kill-server : kill all sessions
:setw synchronize-panes on
:setw synchronize-panes off
:set-window-option xterm-keys on
set-window-option -g xterm-keys on
Ctrl + B : (to press each time before another command)
Command | meaning |
---|---|
Arrow keys | move between panes of the split window |
N | "Next window" |
P | "Previous window" |
z | : zoom in/out in the current span |
d | : detach from the current and let it running on the background (to be reattached to later) |
x | : kill |
% | vertical split |
" | horizontal split |
o | : swap panes |
q | : show pane numbers |
x | : kill pane |
+ | : break pane into window (e.g. to select text by mouse to copy) |
- | : restore pane from window |
space | toggle between layouts |
q | (Show pane numbers, when the numbers show up type the key to goto that pane) |
{ | (Move the current pane left) |
} | (Move the current pane right) |
z | toggle pane zoom |
":set synchronise-panes on" : | synchronise_all_panes in the current session (to execute parallel tasks like multiple iperfs client)" |
There are three main functions that make up an e-mail system.
- First there is the Mail User Agent (MUA) which is the program a user actually uses to compose and read mails.
- Then there is the Mail Transfer Agent (MTA) that takes care of transferring messages from one computer to another.
- And last there is the Mail Delivery Agent (MDA) that takes care of delivering incoming mail to the user's inbox.
Function | Name | Tool which do this |
---|---|---|
Compose and read | MUA (User Agent) | mutt, thunderbird |
Transferring | MTA (Transfer Agent) | msmtp, exim4, thunderbird |
Delivering incoming mail to user's inbox | MDA (Delivery agent) | exim4, thunderbird |
There are two types of MTA (Mail Transfer Agent):
- Mail server : like postfix, or sendmail-server
- SMTP client, which only forwards to an SMTP relay : like ssmtp (deprecated since 2013), use msmtp instead
which sendmail
/usr/sbin/sendmail
ls -l /usr/sbin/sendmail
lrwxrwxrwx 1 root root 5 Jul 15 2014 /usr/sbin/sendmail -> ssmtp
In this case, ssmtp is my mail sender
msmtp is a very simple SMTP client that is easy to configure for sending emails. Its default mode of operation is to forward emails to the SMTP server that you have specified in its configuration; that server then takes care of delivering the emails to their recipients. It is fully compatible with sendmail and supports TLS secure transport, multiple accounts, various authentication methods, and delivery notifications.
Installation
apt install msmtp msmtp-mta
vim /etc/msmtprc
# Default values for all accounts
defaults
auth on
tls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile ~/.msmtp.log
# Example for a Gmail account
account gmail
host smtp.gmail.com
port 587
from [email protected]
user username
password plain-text-password
# Set the default account
account default : gmail
Test email sending
echo -n "Subject: hello\n\nDo see my mail" | sendmail [email protected]
You run the command... and, oops: sendmail: Cannot open mailhub:25. The reason for this is that we didn't provide mailhub settings at all. In order to forward messages, you need an SMTP server configured. That's where SSMTP performs really well: you just need to edit its configuration file once, and you are good to go.
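For the record, a minimal /etc/ssmtp/ssmtp.conf sketch (hostname and credentials are placeholders), although msmtp above is the recommended replacement:
mailhub=smtp.example.com:587
UseSTARTTLS=YES
AuthUser=username
AuthPass=plain-text-password
FromLineOverride=YES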
Note that it also works with netcat
nc smtp.free.fr 25
telnet smtp.free.fr 25
Trying 212.27.48.4...
Connected to smtp.free.fr.
Escape character is '^]'.
220 smtp4-g21.free.fr ESMTP Postfix
HELO test.domain.com
250 smtp4-g21.free.fr
MAIL FROM:<[email protected]>
250 2.1.0 Ok
RCPT TO:<[email protected]>
250 2.1.5 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
Subject: test message
This is the body of the message!
.
250 2.0.0 Ok: queued as 2D8FD4C80FF
quit
221 2.0.0 Bye
Connection closed by foreign host.
It just plugs into
tshark -f "udp port 53" -Y "dns.qry.type == A and dns.flags.response == 0"
count total dns query
tshark -f "udp port 53" -n -T fields -e dns.qry.name | wc -l
tshark -i wlan0 -Y http.request -T fields -e http.host -e http.user_agent
tshark -r example.pcap -Y http.request -T fields -e http.host -e http.user_agent | sort | uniq -c | sort -n
tshark -r example.pcap -Y http.request -T fields -e http.host -e ip.dst -e http.request.full_uri
Search into LDAP
ldapsearch --help
-H URI LDAP Uniform Resource Identifier(s)
-x Simple authentication
-W prompt for bind password
-D binddn bind DN
-b basedn base dn for search
SamAccountName SINGLE-VALUE attribute that is the logon name used to support clients and servers from a previous version of Windows.
ldapsearch -H ldap://10.10.10.10 \
-x \
-W \
-D "user@fqdn" \
-b "ou=ou,dc=sub,dc=under,dc=com" "(sAMAccountName=b.dauphin)"
modify an account (remotely)
apt install ldap-utils
ldapmodify \
-H ldaps://ldap.company.tld \
-D "cn=b.dauphin,ou=people,c=fr,dc=company,dc=fr" \
-W \
-f b.gates.ldif
(the .ldif must contain the modification data)
slapcat -f b.gates.ldif
will prompt you for the string you want to hash, and print the hash to STDOUT
slappasswd -h {SSHA}
dn: [email protected],ou=people,c=fr,dc=company,dc=fr
changetype: modify
replace: userPassword
userPassword: {SSHA}0mBz0/OyaZqOqXvzXW8TwE8O/Ve+YmSl
--list=$ARG | definition |
---|---|
pre,un,unaccepted | list unaccepted/unsigned keys. |
acc or accepted | list accepted/signed keys. |
rej or rejected | list rejected keys |
den or denied | list denied keys |
all | list all above keys |
salt -S 192.168.40.20 test.version
salt -S 192.168.40.0/24 test.version
compound match
salt -C '[email protected]/24 and G@os:Debian' test.version
salt -C '( G@environment:staging or G@environment:production ) and G@soft:redis*' test.ping
salt '*' network.ip_addrs
salt '*' cmd.run
salt '*' state.apply
salt '*' test.ping
salt '*' test.version
salt '*' grains.get
salt '*' grains.item
salt '*' grains.items
salt '*' grains.ls
salt-run survey.diff '*' cmd.run "ls /home"
Forcibly removes all caches on a minion.
WARNING: The safest way to clear a minion cache is by first stopping the minion and then deleting the cache files before restarting it.
soft way
salt '*' saltutil.clear_cache
sure way
systemctl stop salt-minion \
&& rm -rf /var/cache/salt/minion/ \
&& systemctl start salt-minion
SaltStack - pillar, custom modules, states, beacons, grains, returners, output modules, renderers, and utils
Signal the minion to refresh the pillar data.
salt '*' saltutil.refresh_pillar
synchronizes custom modules, states, beacons, grains, returners, output modules, renderers, and utils.
salt '*' saltutil.sync_all
- SSDs
- biosreleasedate
- biosversion
- cpu_flags
- cpu_model
- cpuarch
- disks
- dns
- domain
- fqdn
- fqdn_ip4
- fqdn_ip6
- gid
- gpus
- groupname
- host
- hwaddr_interfaces
- id
- init
- ip4_gw
- ip4_interfaces
- ip6_gw
- ip6_interfaces
- ip_gw
- ip_interfaces
- ipv4
- ipv6
- kernel
- kernelrelease
- kernelversion
- locale_info
- localhost
- lsb_distrib_codename
- lsb_distrib_id
- machine_id
- manufacturer
- master
- mdadm
- mem_total
- nodename
- num_cpus
- num_gpus
- os
- os_family
- osarch
- oscodename
- osfinger
- osfullname
- osmajorrelease
- osrelease
- osrelease_info
- path
- pid
- productname
- ps
- pythonexecutable
- pythonpath
- pythonversion
- saltpath
- saltversion
- saltversioninfo
- selinux
- serialnumber
- server_id
- shell
- swap_total
- systemd
- uid
- username
- uuid
- virtual
- zfs_feature_flags
- zfs_support
- zmqversion
os:
Debian
os_family:
Debian
osarch:
amd64
oscodename:
stretch
osfinger:
Debian-9
osfullname:
Debian
osmajorrelease:
9
osrelease:
9.5
osrelease_info:
- 9
- 5
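Grains such as these can also be used for targeting with -G (single grain match; compound matching with G@ was shown above). A quick sketch:

```bash
salt -G 'os:Debian' test.version
salt -G 'osmajorrelease:9' test.ping
```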
Upgrade Salt-Minion:
cmd.run:
- name: |
exec 0>&- # close stdin
exec 1>&- # close stdout
exec 2>&- # close stderr
nohup /bin/sh -c 'salt-call --local pkg.install salt-minion && salt-call --local service.restart salt-minion' &
- onlyif: "[[ $(salt-call --local pkg.upgrade_available salt-minion 2>&1) == *'True'* ]]"
Upgrade salt-minion bash script
{% set ipaddr = grains['fqdn_ip4'][0] %}
{% if (key | regex_match('.*dyn.company.tld.*', ignorecase=True)) != None %}
salt -C "minion.local or minion2.local" \
> cmd.run "docker run debian /bin/bash -c 'http_proxy=http://10.100.100.100:1598 apt update ; http_proxy=http://10.100.100.100:1598 apt install netcat -y ; nc -zvn 10.3.3.3 3306' | grep open"
minion.local:
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
debconf: delaying package configuration, since apt-utils is not installed
(UNKNOWN) [10.3.3.3] 3306 (?) open
minion2.local:
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
debconf: delaying package configuration, since apt-utils is not installed
(UNKNOWN) [10.3.3.3] 3306 (?) open
Prints the GRANTS for the user
echo "enter your password" ; read -s password ; \
salt "*" \
cmd.run "docker pull imega/mysql-client ; docker run --rm imega/mysql-client mysql --host=10.10.10.10 --user=b.dauphin --password=$password --database=db1 --execute='SHOW GRANTS FOR CURRENT_USER();'" \
env='{"http_proxy": "http://10.10.10.10:9999"}'
Validate config before reload/restart
apachectl configtest
pronounced 'Engine X'
Various HTTP variables
example redirect HTTP to HTTPS
server {
listen 80;
return 301 https://$host$request_uri;
}
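nginx has an equivalent configuration check (assuming nginx is installed on the host):

```bash
nginx -t                              # test the default configuration file
nginx -t -c /etc/nginx/nginx.conf     # test an explicit configuration file
```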
The Domain Name System is a hierarchical
and decentralized
naming system for computers, services, or other resources connected to the Internet or a private network.
It associates various information with domain names assigned to each of the participating entities.
https://wiki.csnu.org/index.php/Installation_et_configuration_de_bind9
the process name of bind9 is "named"
name server control utility
Write (dump) cache of named in default file (/var/cache/bind/named_dump.db)
dumpdb [-all|-cache|-zones|-adb|-bad|-fail] [view ...]
rndc dumpdb -cache default_any
enable query logging in default location (/var/log/bind9/query.log)
rndc querylog [on|off]
toggle querylog mode
rndc querylog
flush Flushes all of the server's caches.
rndc flush
flush [view] Flushes the server's cache for a view.
rndc flush default_any
get unique master zones loaded
named-checkconf -z 2> /dev/null | grep 'zone' | sort -u | awk '{print $2}' | rev | cut --delimiter=/ -f2 | rev | sort -u
named-checkconf -z 2> /dev/null | grep 'zone' | grep -v 'bad\|errors' | sort -u | awk '{print $2}' | rev | cut --delimiter=/ -f2 | rev | sort -u
keep cache
systemctl reload bind9
empty cache
systemctl restart bind9
dig @8.8.8.8 +short www.qwant.com +nodnssec
dig @8.8.8.8 +short google.com +notcp
dig @8.8.8.8 +noall +answer +tcp www.qwant.com A
dig @8.8.8.8 +noall +answer +notcp www.qwant.com A
other options
- +short
- +(no)tcp
- +(no)dnssec
- +noall
- +answer
- type
Verify your URL
https://zabbix.company/zabbix.php?action=dashboard.view
https://zabbix.company/zabbix/zabbix.php?action=dashboard.view
### Test a given item
zabbix_agentd -t system.hostname
zabbix_agentd -t system.swap.size[all,free]
zabbix_agentd -t vfs.file.md5sum[/etc/passwd]
zabbix_agentd -t vm.memory.size[pavailable]
### print all known items
zabbix_agentd -p
curl \
-d '{
"jsonrpc":"2.0",
"method":"apiinfo.version",
"id":1,
"auth":null,
"params":{}
}' \
-H "Content-Type: application/json-rpc" \
-X POST https://zabbix.company/api_jsonrpc.php | jq .
curl \
-d '{
"jsonrpc": "2.0",
"method": "user.login",
"params": {
"user": "b.dauphin",
"password": "toto"
},
"id": 1,
"auth": null
}' \
-H "Content-Type: application/json-rpc" \
-X POST https://zabbix.company/api_jsonrpc.php | jq .
replace $host and $token
curl \
-d '{
"jsonrpc": "2.0",
"method": "host.get",
"params": {
"filter": {
"host": [
"$host"
]
},
"with_triggers": "82567"
},
"id": 2,
"auth": "$token"
}' \
-H "Content-Type: application/json-rpc" \
-X POST https://zabbix.company/api_jsonrpc.php | jq .
Replace $hostname1
,$hostname2
and $token
curl \
-d '{
"jsonrpc": "2.0",
"method": "host.get",
"params": {
"output": ["hostid"],
"filter": {
"host": [
""$hostname1","$hostname2"
]
}
},
"id": 2,
"auth": "$token"
}' \
-H "Content-Type: application/json-rpc" \
-X POST https://zabbix.tld/api_jsonrpc.php | jq '.result'
Replace $hostname1
,$hostname2
and $token
curl \
-d '{
"jsonrpc": "2.0",
"method": "host.get",
"params": {
"output": ["hostid"],
"selectGroups": "extend",
"filter": {
"host": [
"$hostname1","$hostname2"
]
}
},
"id": 2,
"auth": "$token"
}' \
-H "Content-Type: application/json-rpc" \
-X POST https://zabbix.tld/api_jsonrpc.php | jq .
curl \
-d '{
"jsonrpc": "2.0",
"method": "host.get",
"params": {
"output": ["name"],
"selectTags": "extend",
"tags": [
{
"tag": "environment",
"value": "dev",
"operator": 1
}
]
},
"id": 2,
"auth": "$token"
}' \
-H "Content-Type: application/json-rpc" \
-X POST https://zabbix.company/api_jsonrpc.php | jq .
Output hostid, host and name
curl \
-d '{
"jsonrpc": "2.0",
"method": "host.get",
"params": {
"output": ["hostid","host","name"],
"tags": [
{
"tag": "environment",
"value": "dev",
"operator": 1
}
]
},
"id": 2,
"auth": "$token"
}' \
-H "Content-Type: application/json-rpc" \
-X POST https://zabbix.company/api_jsonrpc.php | jq .
curl \
-d '{
"jsonrpc": "2.0",
"method": "host.get",
"params": {
"output": ["name"],
"tags": [
{
"tag": "app",
"value": "swarm",
"operator": "1"
},
{
"tag": "environment",
"value": "dev",
"operator": "1"
}
]
},
"id": 2,
"auth": "$token"
}' \
-H "Content-Type: application/json-rpc" \
-X POST https://zabbix.tld/api_jsonrpc.php | jq .
Warning
This erases all other tags and can set only one tag, so I do not recommend using this feature.
curl \
-d '{
"jsonrpc": "2.0",
"method": "host.update",
"params": {
"hostid": "12345",
"tags": [
{
"tag": "environment",
"value": "staging"
}
]
},
"id": 2,
"auth": "$token"
}' \
-H "Content-Type: application/json-rpc" \
-X POST https://zabbix.tld/api_jsonrpc.php | jq '.result'
By default, each index in Elasticsearch is allocated 5 primary shards and 1 replica which means that if you have at least two nodes in your cluster, your index will have 5 primary shards and another 5 replica shards (1 complete replica) for a total of 10 shards per index.
Meaning | end point (http://ip:9200) |
---|---|
Nodes name, load, heap, Disk used, segments, JDK version | /_cat/nodes?v&h=name,ip,load_1m,heapPercent,disk.used_percent,segments.count,jdk |
More detailed info about an index | /_cat/indices/INDEX/?v&h=index,health,pri,rep,docs.count,store.size,search.query_current,segments,memory.total |
Count the number of documents | /_cat/count/INDEX/?v&h=dc |
Cluster health at a given moment | /_cat/health |
Full index stats | /INDEX/_stats?pretty=true |
Kopf plugin | /_plugin/kopf |
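For instance, these endpoints can be queried with curl (assuming Elasticsearch listens on localhost:9200):

```bash
curl -s 'http://localhost:9200/_cat/health?v'
curl -s 'http://localhost:9200/_cat/nodes?v&h=name,ip,load_1m,heapPercent,disk.used_percent,segments.count,jdk'
```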
Very good tutorial https://blog.ruanbekker.com/blog/2017/11/22/using-elasticdump-to-backup-elasticsearch-indexes-to-json/
Warning
DO NOT BACKUP with wildcard matching
I tested backing up indexes with a wildcard. It works, but when you want to restore the data, elasticdump takes ALL the DATA from ALL indexes in the JSON file to feed the single index you provide in the URL. Example:
elasticdump --input=es_test-index-wildcard.json --output=http://localhost:9200/test-index-1 --type=data
In this example, the file es_test-index-wildcard.json was the result of the following command, which matches 2 indexes (test-index-1 and test-index-2)
elasticdump --input=http://localhost:9200/test-index-* --output=es_test-index-1.json --type=data
So I have to manually expand each index in order to back them up!
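A possible way to expand them, assuming Elasticsearch on localhost:9200 and reusing the elasticdump flags above (the index pattern is an example):

```bash
# dump each matching index to its own JSON file
for index in $(curl -s 'http://localhost:9200/_cat/indices/test-index-*?h=index'); do
  elasticdump --input=http://localhost:9200/$index --output=es_$index.json --type=data
done
```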
Elasticsearch Cluster Topology
Change the sharding, replica count and other settings for future indexes.
For example, on a single-node cluster you don't want any replicas, and a single shard is enough.
curl -X POST '127.0.0.1:9200/_template/default' \
-H 'Content-Type: application/json' \
-d '
{
"index_patterns": ["*"],
"order": -1,
"settings": {
"number_of_shards": "1",
"number_of_replicas": "0"
}
}
' \
| jq .
check config
php-fpm7.2 -t
haproxy -f /etc/haproxy/haproxy.cfg -c -V
Meaning of various status codes
version
java -version
openjdk version "1.8.0_222"
OpenJDK Runtime Environment (build 1.8.0_222-b10)
OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)
Java doesn't use the system CA store but its own keystore
You can manage the keystore with keytool
keytool -list -v -keystore /usr/jdk64/jdk1.7.0_62/jre/lib/security/cacerts
keytool -import -alias dolphin_ltd_root_ca -file /etc/pki/ca-trust/source/anchors/dolphin_ltd_root_ca.crt -keystore /usr/jdk64/jdk1.7.0_62/jre/lib/security/cacerts
keytool -import -alias dolphin_ltd_subordinate_ca -file /etc/pki/ca-trust/source/anchors/dolphin_ltd_subordinate_ca.crt -keystore /usr/jdk64/jdk1.7.0_62/jre/lib/security/cacerts
keytool -delete -alias dolphin_ltd_root_ca -keystore /usr/jdk64/jdk1.7.0_62/jre/lib/security/cacerts
keytool -delete -alias dolphin_ltd_subordinate_ca -keystore /usr/jdk64/jdk1.7.0_62/jre/lib/security/cacerts
{% %}
{%- %}
{% -%}
{%- -%}
(By default) add an empty line before jinja rendering
and add one after
{% set site_url = 'www.' + domain %}
remove the empty line before jinja rendering
and add one after
{%- set site_url = 'www.' + domain %}
add the empty line before jinja rendering
and remove one after
{% set site_url = 'www.' + domain -%}
remove the empty line before jinja rendering
and remove one after
{%- set site_url = 'www.' + domain -%}
Symbol | Meaning |
---|---|
() | tuple |
[] | list |
{} | dictionary |
Working with variables when you don't know whether the variable exists (Jinja2 example)
{% if min_verbose_level is defined
and min_verbose_level %}
and level({{ min_verbose_level }} .. emerg);
{% endif %}
list all versions of python (system wide)
ls -ls /usr/bin/python*
install pip3
apt-get install build-essential python3-dev python3-pip
install a package
pip install virtualenv
pip --proxy http://10.10.10.10:5000 install docker
install without TLS verif (not recommended)
pip install --trusted-host pypi.python.org \
--trusted-host github.com \
https://github.com/Exodus-Privacy/exodus-core/releases/download/v1.0.13/exodus_core-1.0.13.tar.gz
Show information about one or more installed packages
pip3 show $package_name
pip3 show virtualenv
print all installed packages (depends on your environment: venv or system-wide)
pip3 freeze
install from local sources (setup.py required)
python setup.py install --record files.txt
print dependencies tree of a specified package
pipdeptree -p uwsgi
global site-packages ("dist-packages") directories
python3 -m site
more concise list
python3 -c "import site; print(site.getsitepackages())"
Note: with virtualenvs, getsitepackages is not available; sys.path will correctly list the virtualenv's site-packages directory, though.
Create python package (to be downloaded in site-packages local dir)
-----------------------------
some_root_dir/
|-- README
|-- setup.py
|-- an_example_pypi_project
| |-- __init__.py
| |-- useful_1.py
| |-- useful_2.py
|-- tests
|-- |-- __init__.py
|-- |-- runall.py
|-- |-- test0.py
----------------------------
Utility function to read the README file.
Used for the long_description.
It's nice, because now
- we have a top level README file
- it's easier to type in the README file than to put a raw string in below ...
import os
from setuptools import setup
def read(fname):
return open(os.path.join(os.path.dirname(__file__), fname)).read()
setup(
name = "an_example_pypi_project",
version = "0.0.4",
author = "Andrew Carter",
author_email = "[email protected]",
description = ("An demonstration of how to create, document, and publish "
"to the cheese shop a5 pypi.org."),
license = "BSD",
keywords = "example documentation tutorial",
url = "http://packages.python.org/an_example_pypi_project",
packages=['an_example_pypi_project', 'tests'],
long_description=read('README'),
classifiers=[
"Development Status :: 3 - Alpha",
"Topic :: Utilities",
"License :: OSI Approved :: BSD License",
],
)
Within the root directory:
Your package will be built in ./dist/$(package-name)-$(version)-$(py2-compatible)-$(py3-compatible)-any.whl
python setup.py sdist bdist_wheel
example : ./dist/dns_admin-1.0.0-py2-none-any.whl
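The resulting wheel can then be installed with pip (reusing the example artifact name above):

```bash
pip install ./dist/dns_admin-1.0.0-py2-none-any.whl
```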
apt install python-pip python3-pip
pip install pipenv
Usage Examples:
Create a new project using Python 3.7, specifically:
$ pipenv --python 3.7
Remove project virtualenv (inferred from current directory):
$ pipenv --rm
Install all dependencies for a project (including dev):
$ pipenv install --dev
Create a lockfile containing pre-releases:
$ pipenv lock --pre
Show a graph of your installed dependencies:
$ pipenv graph
Check your installed dependencies for security vulnerabilities:
$ pipenv check
Install a local setup.py into your virtual environment/Pipfile:
$ pipenv install -e .
Use a lower-level pip command:
$ pipenv run pip freeze
Commands:
check Checks for security vulnerabilities and against
PEP 508 markers provided in Pipfile.
clean Uninstalls all packages not specified in
Pipfile.lock.
graph Displays currently-installed dependency graph
information.
install Installs provided packages and adds them to
Pipfile, or (if no packages are given),
installs all packages from Pipfile.
lock Generates Pipfile.lock.
open View a given module in your editor.
run Spawns a command installed into the virtualenv.
shell Spawns a shell within the virtualenv.
sync Installs all packages specified in
Pipfile.lock.
uninstall Un-installs a provided package and removes it
from Pipfile.
update Runs lock, then sync.
vim /tmp/testPythonProtocols.py
import ssl
for i in dir(ssl):
if i.startswith("PROTOCOL"):
print(i)
python3 /tmp/testPythonProtocols.py
https://www.rabbitmq.com/management.html
Command | Meaning | default | SaltStack equivalent |
---|---|---|---|
--check | Dry run | no dry run | test=True |
-b, --become | run operations with become | no password prompting | |
-K, --ask-become-pass | ask for privilege escalation password | ||
--become-method=BECOME_METHOD | privilege escalation method to use; valid choices: [sudo, su, pbrun, pfexec, doas, dzdo, ksu, runas, pmrun, enable, machinectl] | sudo | |
--become-user=BECOME_USER | run operations as this user | root | |
Example | meaning |
---|---|
ansible-playbook playbook.yml --user=b.dauphin --become-method=su -b -K | su b.dauphin + password prompting |
ansible-playbook playbook.yml --check --diff --limit 1.2.3.4 | Dry run + show only diff + limit inventory to host 1.2.3.4 |
ansible webservers -m service -a "name=httpd state=restarted"
ansible all -m ping -u user1 --private-key /home/baptiste/.ssh/id_rsa
Specify python interpreter path
ansible 1.2.3.4 -m ping -e 'ansible_python_interpreter=/usr/bin/python3'
list available variables
ansible 10.10.10.10 -m setup
get specific fact
ansible 10.10.10.10 -m setup -a 'filter=ansible_python_version'
To use shell operations like < > | &, use the shell module (the command module does not process them)
the remote system must have the python-apt package installed (required by the apt module)
apt install python-apt
- debug: var=ansible_facts
ansible-playbook --start-at-task="Gather Networks Facts into Variable"
ansible-playbook --tags "docker_login"
[...]
msg: "{{ lookup('vars', ansible_dns) }}"
[...]
[...]
- name: Gather Networks Facts into Variable
setup:
register: setup
- name: Debug Set Facts
debug:
var: setup.ansible_facts.ansible_python_version
---
- hosts: webservers
vars:
syslog_protocol_lvl_4: udp
syslog_port: 514
ansible_python_interpreter: /bin/python
ansible_ssh_user: root
ansible-playbook release.yml --extra-vars '{"version":"1.23.45","other_variable":"foo"}'
ansible-playbook arcade.yml --extra-vars '{"pacman":"mrs","ghosts":["inky","pinky","clyde","sue"]}'
override playbook-defined variables (keep your playbook unmodified)
ansible-playbook lvm.yml --extra-vars "host=es_data_group remote_user=b.dauphin" -i ../inventory/es_data_staging.yml
Full doc of passing-variables-on-the-command-line
Here is the order of precedence from least to greatest (the last listed variables winning prioritization):
- command line values (e.g. "-u user")
- role defaults [1]
- inventory file or script group vars [2]
- inventory group_vars/all [3]
- playbook group_vars/all [3]
- inventory group_vars/* [3]
- playbook group_vars/* [3]
- inventory file or script host vars [2]
- inventory host_vars/* [3]
- playbook host_vars/* [3]
- host facts / cached set_facts [4]
- play vars
- play vars_prompt
- play vars_files
- role vars (defined in role/vars/main.yml)
- block vars (only for tasks in block)
- task vars (only for the task)
- include_vars
- set_facts / registered vars
- role (and include_role) params
- include params
- extra vars (always win precedence)
npm is the official package manager of Node.js. Since Node.js 0.6.3, npm is part of the environment and is therefore installed automatically by default. npm runs from a terminal and manages the dependencies of an application.
npm config set proxy http://ip:port
npm config set https-proxy http://ip:port
# Print the effective node_modules folder to standard out.
npm root
npm root -g
# Display a tree of every package found in the user's folders (without the -g option it only shows the current directory's packages)
npm list -g --depth 0
# To show the package registry entry for a package (here ghost-cli), you can do this:
npm view ghost-cli
npm info ghost-cli
nvm install 8.9.4
will read yarn.lock (like PipFile.lock)
yarn setup
verify dep tree is ok
yarn --check-files
grep grunt.registerTask Gruntfile.js
[knex-migrator]
varnishadm -S /etc/varnish/secret
# For states of backends
varnishadm -S /etc/varnish/secret debug.health
# new version
varnishadm -S /etc/varnish/secret backend.list
# After a crash of Varnish:
varnishadm -S /etc/varnish/secret panic.show
Log hash with filter for request number
varnishlog -c -i Hash
Not enabled by default. Example command to track requests that took more than 10 seconds:
varnishncsa -F '%t "%r" %s %{Varnish:time_firstbyte}x %{VCL_Log:backend}x' -q "Timestamp:Process[2] > 10.0"
curl -X PURGE "http://IP/object"
Don't do anything, just check the config (dry run)
logrotate -d /etc/logrotate/logrotate.conf
logrotate /etc/logrotate.conf -v
/var/log/dpkg.* {
monthly
rotate 12
size 100M
compress
delaycompress
missingok
notifempty
create 644 root root
}
CREATE USER 'newuser'@'localhost' IDENTIFIED BY 'password';
CREATE USER 'api_153'@'10.10.%.%' IDENTIFIED BY 'password';
SELECT user, host FROM mysql.user;
SHOW CREATE USER api
GRANT SELECT, INSERT, UPDATE, DELETE ON `github`.* TO 'api_153'@'10.10.%.%';
GRANT ALL PRIVILEGES ON `github`.`user` TO 'api_153'@'10.10.%.%';
-- Apply GRANT
FLUSH PRIVILEGES;
REVOKE INSERT ON *.* FROM 'jeffrey'@'localhost';
REVOKE ALL PRIVILEGES ON `github`.* FROM 'jeffrey'@'localhost';
show table status like 'mytablename'\G
*************************** 1. row ***************************
Name: mytablename
Engine: MyISAM
Version: 10
Row_format: Dynamic
Rows: 2444
Avg_row_length: 7536
Data_length: 564614700
Max_data_length: 281474976710655
Index_length: 7218176
Data_free: 546194608
Auto_increment: 1187455
Create_time: 2008-03-19 10:33:13
Update_time: 2008-09-02 22:18:15
Check_time: 2008-08-27 23:07:48
Collation: latin1_swedish_ci
Checksum: NULL
Create_options: pack_keys=0
Comment:
From shell (outside of a MySQL prompt)
mysql -u root -p -e 'SHOW VARIABLES WHERE Variable_Name LIKE "%dir";'
Show users and remote client IP or subnet etc
SELECT user, host FROM mysql.user;
select user, host FROM mysql.user WHERE user = 'b.dauphin';
Show current queries
SHOW FULL PROCESSLIST;
%
is a wildcard char like *
SHOW VARIABLES WHERE Variable_Name LIKE "%log%";
SHOW VARIABLES WHERE Variable_Name LIKE "wsrep%";
SHOW STATUS like 'Bytes_received';
SHOW STATUS like 'Bytes_sent';
The file mysql-bin.[index] keeps a list of all binary logs mysqld has generated and auto-rotated. The mechanisms for cleaning out the binlogs in conjunction with mysql-bin.[index] are:
PURGE BINARY LOGS TO 'binlogname';
PURGE BINARY LOGS BEFORE 'datetimestamp';
mysqlbinlog -d github \
--base64-output=DECODE-ROWS \
--start-datetime="2005-12-25 11:25:56" \
pa6.k8s.node.01-bin.000483
SHOW CREATE TABLE user;
SHOW GRANTS FOR user@git.baptiste-dauphin.com;
SELECT table_name AS `Table`, round(((data_length + index_length) / 1024 / 1024), 2) `Size in MB` FROM information_schema.TABLES WHERE table_schema = "github_db1" AND table_name = "table1";
SELECT
table_schema as `Database`,
table_name AS `Table`,
round(((data_length + index_length) / 1024 / 1024), 2) `Size in MB`,
round(((data_length + index_length) / 1024 / 1024 / 1024), 2) `Size in GB`
FROM information_schema.TABLES
ORDER BY table_schema, data_length + index_length DESC;
SELECT table_schema "Database", ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) "DB Size in MB" FROM information_schema.tables GROUP BY table_schema;
Pipe the .sql.gz file into gunzip and then send the output to mysql
gunzip < [compressed_filename.sql.gz] | mysql -u [user] -p[password] [databasename]
If you encounter errors such as foreign key constraint failures
gunzip < heros_db.sql.gz | mysql --init-command="SET SESSION FOREIGN_KEY_CHECKS=0;" -u root -p heros
mysql -u baptiste -p -h database.baptiste-dauphin.com -e "SELECT table_schema 'DATABASE_1', ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) 'DB Size in MB' FROM information_schema.tables GROUP BY table_schema;"
Run after a MySQL upgrade. Updates system tables like performance_schema.
mysql_upgrade -u root -p
Test the configuration before restarting. It will report any errors.
mysqld --help
Print the configuration mysqld would use if it were started
mysqld --print-defaults
with mysqldump
mysqldump -u root -p \
--all-databases \ # Dump all tables in all databases, WITHOUT 'INFORMATION_SCHEMA' and 'performance_schema'
--add-drop-database \ # Add DROP DATABASE statement before each CREATE DATABASE statement
--ignore-table=DB.table_name \
--skip-add-locks \ # Do not add locks
--skip-lock-tables \
--single-transaction \
> /home/b.dauphin/mysqldump/dump_mysql_.sql
mysqldump -h 10.10.10.10 \
-u baptiste \
-p*********** db1 table1 table2 table3 \
--skip-add-locks \
--skip-lock-tables \
--single-transaction \
| gzip > /home/b.dauphin/backup-`date +%d-%m-%Y-%H:%M:%S`.sql.gz
To export to a file (structure only)
mysqldump -u [user] -p[pass] --no-data mydb > mydb.sql
To export to a file (data only)
mysqldump -u [user] -p[pass] --no-create-info mydb > mydb.sql
Example
mysqldump \
-u root \
-p user1 \
--single-transaction \
--skip-add-locks \
--skip-lock-tables \
--skip-set-charset \
--no-data \
> db1_STRUCTURE.sql
mysqldump \
-u root \
-p user1 \
--single-transaction \
--skip-add-locks \
--skip-lock-tables \
--skip-set-charset \
--no-create-info \
> db1_DATA.sql
CREATE DATABASE db1;
mysql -u root -p db1 < db1_STRUCTURE.sql
mysql -u root -p db1 < db1_DATA.sql
To import to database
mysql -u [user] -p[pass] mydb < mydb.sql
or
gunzip < heros_db.sql.gz | mysql --init-command="SET SESSION FOREIGN_KEY_CHECKS=0;" -u root -p heros
(open source, cost-effective, and robust MySQL clustering)
Test replication from reverse proxy
for i in `seq 1 6`; do mysql -u clustercheckuser -p -e "show variables like 'server_id'; select user()" ; done
Get info about master/slave replication
redis-cli -h 10.10.10.10 -p 6379 -a $PASSWORD info replication
FLUSH all keys of all databases
redis-cli FLUSHALL
Delete all keys of the specified Redis database
redis-cli -n <database_number> FLUSHDB
remove keys from file as input
redis --help
-c Enable cluster mode (follow -ASK and -MOVED redirections).
for line in $(cat lines.txt); do redis-cli -a xxxxxxxxx -p 7000 -c del $line; done
Check all databases
CONFIG GET databases
1) "databases"
2) "16"
INFO keyspace
db0:keys=10,expires=0
db1:keys=1,expires=0
db3:keys=1,expires=0
Delete multiples keys
redis-cli -a XXXXXXXXX --raw keys "my_word*" | xargs redis-cli -a XXXXXXXXX del
Resolve warning
cat /etc/systemd/system/disable-transparent-huge-pages.service
[Unit]
Description=Disable Transparent Huge Pages
[Service]
Type=oneshot
ExecStart=/bin/sh -c "/bin/echo never | tee /sys/kernel/mm/transparent_hugepage/enabled"
[Install]
WantedBy=multi-user.target
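Then reload systemd and enable the unit (assuming the file above is saved as /etc/systemd/system/disable-transparent-huge-pages.service):

```bash
systemctl daemon-reload
systemctl enable --now disable-transparent-huge-pages.service
cat /sys/kernel/mm/transparent_hugepage/enabled   # should show [never]
```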
get prompt
influx
SHOW databases;
USE lands
SHOW RETENTION POLICIES ON "lands"
MySQL | Influx |
---|---|
DATABASE | DATABASE |
MEASUREMENT | TABLE |
COLUMN | FIELD && TAG |
SHOW series ON database FROM virtualmachine WHERE cluster = 'PROD'
Each record stored inside of a measurement is known as a point . Points are made up of the following:
- time : Timestamp that represents the time at which the data was recorded.
- field : Contain the actual measurement data, e.g 5% CPU utilisation. Each point must contain one or more fields .
- tags : Metadata about the data being recorded, e.g the hostname of the device whose CPU is being monitored. Each point can contain zero or more tags .
(Note that both the fields and tags can be thought of as columns in the database table. We'll see why in a moment.)
-- default on all measurement
SHOW field keys
-- default on all measurement
SHOW tag keys
SELECT usage_user,cpu,host
FROM cpu
WHERE cpu='cpu-total'
AND host='ubuntu'
AND time > now() - 30s
Even if you add memory with VMware, Debian won't see it in free -m.
You have to bring it 'online':
grep offline /sys/devices/system/memory/*/state | while read line; do echo online > ${line/:*/}; done
can be
- Disk
- SSD
- SD card (mmc)
Your devices are recognized at the hardware level by the kernel and then exposed on the system as files (because on Linux, everything is a file) by udev (systemd-udevd.service); this is not file-system related.
dmesg -T
sudo udevadm monitor
dd \
if=/home/baptiste/Downloads/2019-09-26-raspbian-buster-lite.img \
of=/dev/mmcblk0 \
bs=64K \
conv=noerror,sync \
status=progress
How to zero the first 512 bytes (MBR size) of a disk
dd \
if=/dev/zero \
of=/dev/mmcblk0 \
bs=512 count=1 \
conv=noerror,sync \
status=progress
LVM stands for Logical Volume Management
Basically, you have 3 nested levels in LVM:
- Physical Volume (pv)
- Volume Group (vg)
- Logical Volume (lv), which is the only one you can mount on a system
list disks
lsblk
Run fdisk to manage disks
fdisk /dev/sdx
m        #### print the help menu
n        #### new partition
p        #### primary partition
t        #### change the partition type
8e       #### hex code for partition type "Linux LVM"
w        #### write changes to disk
Initialize a disk or PARTITION for use by LVM
/dev/sdx : file system path of a physical disk
/dev/sdxX : file system path of a partition of a physical disk
pvcreate /dev/sdxX
Add physical volumes to a volume group (/dev/sdb1)
vgdisplay /dev/sdaX
vgextend system-vg /dev/sdbx
Extend the size of a logical volume
lvextend -l +100%FREE /dev/vg_data/lv_data
/dev/vg_data/lv_data
is still a mountable device with a greater physical size but with the same file system size as previous. So you need to extend the fs to the new extended physical size.
resize2fs /dev/VG_Name/LV_Name
Optional: notice that LVM creates a directory named after the VG containing a device file named after the LV
resize2fs /dev/HOSTNAME-vg/root
resize2fs /dev/system-vg/root
To check that the extend procedure succeeded, don't use lsblk but df instead
df -h
Create the new directory
mkdir /data
1 - create the physical volume
pvcreate /dev/sdb
pvdisplay
2 - create the volume group
vgcreate vg_NAME /dev/sdb
vgdisplay
3 - create the logical volume
lvcreate -l +100%FREE -n lv_NAME vg_NAME
lvdisplay
4 - Create the file system
mkfs.ext4 /dev/mapper/vg_NAME-lv_NAME
5 - mount the Logicial Volume in /data
mount /dev/mapper/vg_NAME-lv_NAME /data
6 - Optional but recommended: make the LV mount persistent across reboots.
Copy an existing line and replace the device with the LV path
vim /etc/fstab
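A possible /etc/fstab line for this mount (a sketch; adjust vg_NAME/lv_NAME and the options to your setup):

```
/dev/mapper/vg_NAME-lv_NAME  /data  ext4  defaults  0  2
```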
Follow the instruction
cfdisk /dev/sdb
If you want to remove a physical disk contained in an LV (tutorial)
pvs
pvdisplay
vgdisplay
lvdisplay
fdisk -l
# pvdisplay
Couldnt find device with uuid EvbqlT-AUsZ-MfKi-ZSOz-Lh6L-Y3xC-KiLcYx.
--- Physical volume ---
PV Name /dev/sdb1
VG Name vg_srvlinux
PV Size 931.51 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 238466
Free PE 0
Allocated PE 238466
PV UUID xhwmxE-27ue-dHYC-xAk8-Xh37-ov3t-frl20d
--- Physical volume ---
PV Name unknown device
VG Name vg_srvlinux
PV Size 465.76 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 119234
Free PE 0
Allocated PE 119234
PV UUID EvbqlT-AUsZ-MfKi-ZSOz-Lh6L-Y3xC-KiLcYx
#### vgreduce --removemissing --force vg_srvlinux
Couldnt find device with uuid EvbqlT-AUsZ-MfKi-ZSOz-Lh6L-Y3xC-KiLcYx.
Removing partial LV LogVol00.
Logical volume "LogVol00" successfully removed
Wrote out consistent volume group vg_srvlinux
#### pvdisplay
--- Physical volume ---
PV Name /dev/sdb1
VG Name vg_srvlinux
PV Size 931.51 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 238466
Free PE 238466
Allocated PE 0
PV UUID xhwmxE-27ue-dHYC-xAk8-Xh37-ov3t-frl20d
lspci
list your graphic card
lspci | grep -E "(VGA|3D)" -C 2
lsusb
lscpu : human-readable version of /proc/cpuinfo
less /proc/cpuinfo
cat /proc/meminfo
nproc
Xrandr common cmd
xrandr --help
xrandr --current
xrandr --output DP-2 --mode 1680x1050 --primary
xrandr --output DP-1 --mode 1280x1024 --right-of DP-2
xrandr --output DP-1 --auto --right-of eDP-1
xrandr --output HDMI-1 --auto --right-of DP-1
Monitor plugged in but not displaying anything
xrandr --auto
sudo dpkg-reconfigure libxrandr2
Log out of your current Window Manager (like i3, Cinnamon, or GNOME), select another one, log in, then log out and go back to your preferred WM. It may resolve the error.
Blueman is a full featured Bluetooth manager written in GTK.
Be sure to enable the Bluetooth daemon and start Blueman with blueman-applet
. A graphical settings panel can be launched with blueman-manager
or your favourite bluetooth manager.
yaourt -S gtk
yaourt -S blueman
Then run blueman
blueman-applet
Virtualization (OS-level)
OS-level virtualization refers to an operating system paradigm in which the kernel allows the existence of multiple isolated user-space instances. Such instances, called
- containers (Solaris, Docker)
- Zones (Solaris)
- virtual private servers (OpenVZ)
- partitions
- virtual environments (VEs)
- virtual kernel (DragonFly BSD)
- jails (FreeBSD jail or chroot jail)
Those instances may look like real computers from the point of view of programs running in them.
- A computer program running on an ordinary operating system can see all resources (connected devices, files and folders, network shares, CPU power, quantifiable hardware capabilities) of that computer.
- However, programs running inside of a container can only see the container's contents and devices assigned to the container.
Operating-system-level virtualization usually imposes less overhead than full virtualization because programs in virtual partitions use the operating system's normal system call interface and do not need to be subjected to emulation or be run in an intermediate virtual machine, as is the case with full virtualization (such as VMware ESXi, QEMU or Hyper-V) and paravirtualization (such as Xen or User-mode Linux). This form of virtualization also does not require hardware support for efficient performance.
Wikipedia of Virtualization OS-level
Docker CLI -> Docker Engine -> containerd -> containerd-shim -> runC (or other runtime)
Note that dockerd (the Docker daemon) has no children. The parent process of all containers is containerd.
There is one containerd-shim per container process; it manages the STDIO FIFOs and keeps them open for the container in case containerd or Docker dies.
runC is built on libcontainer which is the same container library powering a Docker engine installation. Prior to the version 1.11, Docker engine was used to manage volumes, networks, containers, images etc.. Now, the Docker architecture is broken into four components: Docker engine, containerd, containerd-shm and runC. The binaries are respectively called docker, docker-containerd, docker-containerd-shim, and docker-runc.
To run a container, Docker engine creates the image, pass it to containerd. containerd calls containerd-shim that uses runC to run the container.
Then, containerd-shim allows the runtime (runC in this case) to exit after it starts the container : This way we can run daemon-less containers because we are not having to have the long running runtime processes for containers.
Get pid of containerd
pidof containerd
921
Get the child of containerd (i.e. the pid of containerd-shim), i.e. search for the process whose parent process is containerd (hence --ppid)
ps -o pid --no-headers --ppid $(pidof containerd)
19485
Get child of containerd-shim (i.e. the real final containerized process)
ps -o pid --no-headers --ppid $(ps -o pid --no-headers --ppid $(pidof containerd))
19502
Get the name of the output process
ps -p $(ps -o pid --no-headers --ppid $(ps -o pid --no-headers --ppid $(pidof containerd))) -o comm=
bash
docker run -d \
--name elasticsearch \
--net somenetwork \
--volume my_app:/usr/share/elasticsearch/data \
-p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
elasticsearch:7.4.1
(no interactive)
docker run debian ls
docker run debian /bin/bash -c 'cd /home ; ls -l'
docker run debian \
/bin/bash -c 'http_proxy=http://10.100.100.100:1598 apt update ; http_proxy=http://10.100.100.100:1598 apt install netcat -y ; nc -zvn 10.3.3.3 3306'
gitlab GET /groups/987/variables/DOCKER_TLS_CERT \
| jq -r .value | \
docker run \
--rm \
-i \
--entrypoint=sh \
frapsoft/openssl \
-c 'openssl x509 -in /dev/stdin -noout -dates 2>/dev/null'
docker volume create my_app
avoid false positives
^ : begin with
$ : end with
docker ps -aqf "name=^containername$"
Print only running command
docker ps --format "{{.Command}}" --no-trunc
Resources management
docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}" --no-stream
Info of filesystem
docker inspect -f '{{ json .Mounts }}' $(docker ps -aqf "name=elasticsearch") | jq
(On swarm manager) find where an app is running. Find the last updated date
docker service ps <app_name>
docker service inspect log-master_logstash -f '{{ json .UpdatedAt }}'
print cluster nodes
docker node ls
get address + role
for node in $(docker node ls -q); do docker node inspect --format '{{.Status.Addr}} ({{.Spec.Role}})' $node; done
Print labels of nodes
docker node ls -q | xargs docker node inspect \
-f '[{{ .Description.Hostname }}]: {{ range $k, $v := .Spec.Labels }}{{ $k }}={{ $v }} {{end}}'
swarm manager shell
docker swarm join-token worker
It outputs a copy-pastable bash line like the following (be careful: it doesn't include the listen IP of the worker)
new worker shell
docker swarm join \
--token <TOKEN_WORKER> \
--listen-addr WORKER-LISTEN-IP:2377 \
<MANGER-LISTEN-IP>:2377
Create a context to make your work easier:
context = given_user + given_cluster + given_namespace
kubectl config set-context bdauphin-training \
--user b.dauphin-k8s-home-cluster \
--cluster k8s-home-cluster \
--namespace dev-scrapper
Print your current context and cluster info
kubectl config get-contexts
kubectl cluster-info
A Deployment provides declarative updates for Pods and ReplicaSets.
You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
kubectl create deployment nginx-test-deploy --image nginx -n bdauphin-test
I do not recommend declaring a pod directly; prefer using a Deployment.
Restart a pod: the quickest way is to scale the Deployment to zero replicas and then back to your desired number of replicas
kubectl scale deployment nginx --replicas=0
kubectl scale deployment nginx --replicas=5
kubectl create service nodeport bdauphin-nginx-test --tcp=8080:80
ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable. This page provides a series of usage examples demonstrating how to create ConfigMaps and configure Pods using data stored in ConfigMaps.
Most of the time it's a list of key-value pairs.
It can be exposed as environment variables and/or mounted into the pod at a specified path.
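A minimal sketch with kubectl (names and values are hypothetical):

```bash
kubectl create configmap app-config --from-literal=LOG_LEVEL=info -n bdauphin-test
kubectl get configmap app-config -o yaml -n bdauphin-test
```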
Kubernetes secret objects let you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys. Putting this information in a secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image . See Secrets design document for more information.
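A minimal sketch (names and values are hypothetical):

```bash
kubectl create secret generic db-credentials --from-literal=password=toto -n bdauphin-test
kubectl get secret db-credentials -o yaml -n bdauphin-test   # values are base64 encoded
```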
Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within an enterprise.
complete doc
- Role : defines rules
- Role Binding

A Role defines:
- Rules
  - API Groups (default : core API group)
  - resources (ex : pod)
  - verbs (allowed methods)
A Role can only be used to grant access to resources within a single namespace. Here's an example Role in the "default" namespace that can be used to grant read access to pods:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: pod-reader
rules:
- apiGroups: [""] #### "" indicates the core API group
resources: ["pods"]
verbs: ["get", "watch", "list"]
A Role Binding defines:
- Subjects
  - kind (ex : User)
  - name (ex : jane)
  - apiGroup
- Role References (roleRef)
  - kind (ex : Role)
  - name (ex : pod-reader)
  - apiGroup
A role binding grants the permissions defined in a role to a user or set of users. It holds a list of subjects (users, groups, or service accounts), and a reference to the role being granted. Permissions can be granted within a namespace with a RoleBinding, or cluster-wide with a ClusterRoleBinding.
Example
This role binding allows "jane" to read pods in the "default" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: read-pods
namespace: default
subjects:
- kind: User
name: jane #### Name is case sensitive
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role #### this must be Role or ClusterRole
name: pod-reader #### this must match the name of the Role or ClusterRole you wish to bind to
apiGroup: rbac.authorization.k8s.io
An API object that manages external access to the services in a cluster, typically HTTP.
Ingress can provide load balancing, SSL termination and name-based virtual hosting.
What is ingress ?
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
internet
|
[ Ingress ]
--|-----|--
[ Services ]
An Ingress can be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
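A minimal Ingress manifest sketch (hostname and service name are hypothetical; the exact apiVersion depends on your cluster version):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service      # existing Service to route traffic to
            port:
              number: 80
```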
Why use config file instead of CLI ?
- The CLI is good to begin with and helps understanding, but it is heavy to use every day
- Definitions are often complex; it's easier to use a config file
- Config files can be versioned (git)
kubectl get deploy nginx -o yaml | tee nginx-deploy.yaml
kubectl get serviceaccounts/default -n bdauphin-test -o yaml | tee serviceaccounts.yaml
kubectl get pods/nginx-65d61548fd-mfhpr -o yaml | tee pod.yaml
First, get all resources in your current namespace, or specify another one
watch -n 1 kubectl get all -o wide
watch -n 1 kubectl get all -o wide -n default
Client : helm
Server : tiller
Helm uses the Go template rendering engine
helm create $mychart
helm create elasticsearch
Helm will create a new directory in your project named after the chart (here elasticsearch), with:
elasticsearch
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml
The most important piece of the puzzle is the templates/
directory.
It's worth noting, however, that the directory is named templates, and Helm runs each file in this directory through a Go template rendering engine.
helm install --dry-run --debug ./elasticsearch
helm install ./elasticsearch
The template in service.yaml makes use of the Helm-specific objects .Chart
and .Values
.
Values | Default | override | meaning |
---|---|---|---|
.Chart |
provides metadata about the chart to your definitions such as the name, or version | ||
.Values |
values.yaml |
--set key=value , --values $file |
key element of Helm charts, used to expose configuration that can be set at the time of deployment |
For more advanced configuration, a user can specify a YAML file containing overrides with the --values
option.
helm install --dry-run --debug ./mychart --set service.internalPort=8080
helm install --dry-run --debug ./mychart --values myCustomeValues.yaml
As you develop your chart, it's a good idea to run it through the linter to ensure you're following best practices and that your templates are well-formed. Run the helm lint command to see the linter in action:
helm lint ./mychart
==> Linting ./mychart
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
- Generate your API Key (see the official documentation)
- Get your company id (i.e. sharing_id)
curl -H"Authorization: Apikey $APIKEY" \
"https://api.gandi.net/v5/organization/organizations\?type=company" \
| jq '.[].id'
List all domains
curl -H"X-Api-Key: $APIKEY" \
https://dns.api.gandi.net/api/v5/domains\?sharing_id\=$SHARING_ID \
| jq -r '.[].fqdn' \
> domain.list
Copy data
For each records in a given domain get all records info (type, ttl, name, href, values) and create.
mkdir domains_records
while read domain; do
(curl -H"X-Api-Key: $APIKEY" \
https://dns.api.gandi.net/api/v5/domains/$domain/records\?sharing_id\=$SHARING_ID \
| jq . > ./domains_records/$domain) &
done <domain.list
CentOS-specific commands which differ from Debian
iptables-save > /etc/sysconfig/iptables
cat /etc/system-release
CentOS Linux release 7.6.1810 (Core)
yum install httpd
yum remove postgresql.x86_64
yum update postgresql.x86_64
yum search firefox
yum info samba-common.i686
yum groupinstall 'DNS Name Server'
yum repolist
yum check-update
yum list | less
yum list installed | less
yum provides /etc/sysconfig/nf
yum grouplist
yum list installed | grep unzip
to be updated...
Installation
mkfs.fat -F32 /dev/sdb5
/dev/sdb5 /efi vfat rw,relatime 0 2
pacman -S grub
grub-install --target=x86_64-efi --efi-directory=/efi --bootloader-id=GRUB_ARCH
grub-mkconfig -o /boot/grub/grub.cfg
pacman -Syu
sudo pacman -Qsq pulseaudio
https://wiki.archlinux.org/index.php/Pacman/Rosetta
sudo pacman -S pulseaudio tree xf86-video-intel mesa-dri opencl-nvidia sudo polkit lxsession kernel headers git gdm terminator keepass firefox
You just have to use the NetworkManager service, which is much simpler than other wireless connection managers (like wicd or netctl). NetworkManager lets you set up your wifi settings graphically once and auto-discovers SSIDs.
systemctl start NetworkManager
# and enable it at boot; by default no wifi connection manager is enabled on Arch Linux
systemctl enable NetworkManager
Then, fill in your infos in your graphical wifi settings
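If you prefer the command line, NetworkManager also ships nmcli (SSID and passphrase below are placeholders):

```bash
nmcli device wifi list
nmcli device wifi connect "MY_SSID" password "MY_PASSPHRASE"
nmcli connection show
```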
https://www.raspberrypi.org/downloads/raspbian/
Before you can download and install packages, you first have to install the Package Control package. It's dumb, but it's not included in the Sublime Text installation...
The simplest method of installation is through the Sublime Text console. The console is accessed via the ctrl+` shortcut or the View > Show Console menu. Once open, paste the appropriate Python code for your version of Sublime Text into the console.
import urllib.request,os,hashlib; h = '6f4c264a24d933ce70df5dedcf1dcaee' + 'ebe013ee18cced0ef93d5f746d80ef60'; pf = 'Package Control.sublime-package'; ipp = sublime.installed_packages_path(); urllib.request.install_opener( urllib.request.build_opener( urllib.request.ProxyHandler()) ); by = urllib.request.urlopen( 'http://packagecontrol.io/' + pf.replace(' ', '%20')).read(); dh = hashlib.sha256(by).hexdigest(); print('Error validating download (got %s instead of %s), please try manual install' % (dh, h)) if dh != h else open(os.path.join( ipp, pf), 'wb' ).write(by)
official source This code creates the Installed Packages folder for you (if necessary), and then downloads the Package Control.sublime-package into it. The download will be done over HTTP instead of HTTPS due to Python standard library limitations, however the file will be validated using SHA-256.
Open commande palette
ctrl + shift + p
Then open the Sublime Text package manager, select Package Control: Install Package (short name: insp), then enter the package name.
Name | Usage | insp name | URL |
---|---|---|---|
Markdown Preview | To see a preview of your README.md files before commit them | MarkdownPreview |
https://facelessuser.github.io/MarkdownPreview/install/ |
Compare Side-By-Side | Compares two tabs | Compare Side-By-Side |
https://packagecontrol.io/packages/Compare%20Side-By-Side |
Generic Config | Syntax generic config colorization | ||
PowerCursors | multiple cursors placements | ||
Materialize | Several beautiful color scheme | ||
MarkdownPreview | Preview your .md file | ||
MarkdownTOC | Generate your Table of content of MarkDown files |
Name | shortcut |
---|---|
Do anything (command palet) | Ctrl + Shirt + P |
Switch previous / next tab | ctrl + shift + page_up \ ctrl + shift + page_down |
Switch to a specific tab | ctrl + p , and write name of your tab (file) |
Move a line or a block of line | ctrl + shift + arrow up \ ctrl + shift + arrow down |
Switch upper case | Ctrl + k and then Ctrl + u |
Switch lower case | Ctrl + k and then Ctrl + l |
Sort Lines | F9 (Edit > Sort Lines) |
Goto anywhere | Ctrl + R |
Open any file | Ctrl + P |
Goto line number | ctrl + G |
Spell check | F6 |
New cursor above/below | alt+shift+arrow |
Lite images: https://downloads.raspberrypi.org/raspbian_lite/images/
With desktop: https://downloads.raspberrypi.org/raspbian/images/
With desktop & recommended software: https://downloads.raspberrypi.org/raspbian_full/images/
Excellent tutorial
https://medium.com/factory-mind/regex-tutorial-a-simple-cheatsheet-by-examples-649dc1c3f285
Online tester
https://regex101.com/
Support highlight syntax GitHub guide - Master Markdown tutorial
Source
gnome-terminal
GNOME Terminal (the default Ubuntu terminal): Open Terminal
→ Preferences
and click on the selected profile under Profiles
. Check Custom font under Text Appearance and select MesloLGS NF Regular
or Hack
or the font you like.
1- Ensure that your terminal is gnome-terminal
update-alternatives --get-selections | grep -i term
x-terminal-emulator manual /usr/bin/gnome-terminal.wrapper
Install dconf
sudo apt-get install dconf-tools
dconf-editor
Run it and go to path org
> gnome
> desktop
> interface
> monospace-font-name
CLI
gsettings offers a simple commandline interface to GSettings. It lets you get, set or monitor an individual key for changes.
To know the current settings, type the following commands in a terminal:
gsettings get org.gnome.desktop.interface document-font-name
gsettings get org.gnome.desktop.interface font-name
gsettings get org.gnome.desktop.interface monospace-font-name
gsettings get org.gnome.nautilus.desktop font
You can set fonts with the following commands in a terminal:
For example, Monospace 11 does not support symbols, which is ugly if you have a custom shell.
My choices, which differ from the defaults (the last number argument is the size):
for terminal
gsettings set org.gnome.desktop.interface monospace-font-name 'Hack 12'
for software like KeePass2
gsettings set org.gnome.desktop.interface font-name 'Hack 12'
Get list of available fonts
fc-list | more
fc-list | grep -i "word"
fc-list | grep -i UbuntuMono
To list font faces that cover the Hindi language:
fc-list :lang=hi
search by family
fc-list :family="NotoSansMono Nerd Font Mono"
search with complete name
fc-list :fullname="Noto Sans Mono ExtraCondensed ExtraBold Nerd Font Complete Mono"
To find all similar keys in the schema, type the following command:
gsettings list-recursively org.gnome.desktop.interface
To reset all values of keys, run the following command in a terminal:
gsettings reset-recursively org.gnome.desktop.interface
In some cases, update-alternatives is not enough, especially for URL handling or web browsing.
xdg-settings - get various settings from the desktop environment
In my use case, I set up update-alternatives but it didn't change the behaviour for URL handling (URLs printed in my terminal, especially useful after git push for creating a merge request).
Correctly set up, but it doesn't affect the behaviour:
sudo update-alternatives --config x-www-browser
xdg-settings check default-web-browser brave.desktop
no
xdg-settings --list default-url-scheme-handler
Known properties:
default-url-scheme-handler Default handler for URL scheme
default-web-browser Default web browser
xdg-settings get default-web-browser
firefox-esr.desktop
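The fix in this case is to set the handler through xdg-settings itself (brave.desktop is the example desktop entry used above):

```bash
xdg-settings set default-web-browser brave.desktop
xdg-settings get default-web-browser
```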
Get weather in your terminal
curl http://v2.wttr.in/Rouen
Name | TLDR meaning | further explanations |
---|---|---|
TLDR | Too long; didn't read | |
CLI / Prompt | Command Line Interpreter / command-line interface, as opposed to a graphical, mouse-clickable interface | |
Shell Linux | CLI of Linux (sh, bash, dash, csh, tcsh, zsh) | |
Java Heap | Shared among all Java virtual machine threads. The heap is the runtime data area from which memory for all class instances and arrays is allocated. | |
Java Stack | Each Java virtual machine thread has a private Java virtual machine stack holding local variables and partial results, and it plays a part in method invocation and return | |
Name | Description | Logo |
---|---|---|
GitHub | Biggest code hosting platform (Owned by Microsoft) | |
GitLab | Code hosting + entire DevOps lifecycle. (almost as big as github) | |
LGTM | Continuous security analysis. Pluggable with github | |
Regex101 | Regular expression tester | |
Gitter | Community conversations for software developers | |
Signal | End-to-End messaging app. Used by Edward Snowden | |
Keybase | End-to-End messaging and file sharing |