
Friday, December 22, 2023

Upgrading to Fedora 38

Upgraded from Fedora 37 to 38 following the standard instructions.  There were no errors indicated during the upgrade: the packages were downloaded and installed, and the system rebooted.  When I came back to the machine after the upgrade and got to the login screen, my USB mouse was functioning sporadically.  I could still log in with the keyboard, but the system felt slow and Fedora had a warning that something didn't load correctly.

I powered down the system and then turned it back on (the usual "give it a kick" first step when hardware isn't working) and everything seemed to be working normally.  It might be because my mouse is connected to the desktop through the monitor's USB input?

Once I was back on, I did a

sudo dnf update

to see if I was current, and it gave the following error:

Problem 1: cannot install the best update candidate for package libheif-freeworld-1.15.1-4.fc38.x86_64
  - nothing provides libheif(x86-64) = 1.17.5 needed by libheif-freeworld-1.17.5-1.fc38.x86_64 from rpmfusion-free-updates
 Problem 2: problem with installed package libheif-freeworld-1.15.1-4.fc38.x86_64
  - package libheif-freeworld-1.15.1-4.fc38.x86_64 from @System requires libheif(x86-64) = 1.15.1, but none of the providers can be installed
  - package libheif-freeworld-1.15.1-4.fc38.x86_64 from rpmfusion-free requires libheif(x86-64) = 1.15.1, but none of the providers can be installed
  - cannot install both libheif-1.16.2-2.fc38.x86_64 from updates and libheif-1.15.1-2.fc38.x86_64 from @System
  - cannot install both libheif-1.16.2-2.fc38.x86_64 from updates and libheif-1.15.1-2.fc38.x86_64 from fedora
  - cannot install the best update candidate for package libheif-1.15.1-2.fc38.x86_64
  - nothing provides libheif(x86-64) = 1.17.5 needed by libheif-freeworld-1.17.5-1.fc38.x86_64 from rpmfusion-free-updates
================================================================================
 Package             Arch     Version            Repository                Size
================================================================================
Skipping packages with conflicts:
(add '--best --allowerasing' to command line to force their upgrade):
 libheif             x86_64   1.16.2-2.fc38      updates                  298 k
Skipping packages with broken dependencies:
 libheif-freeworld   x86_64   1.17.5-1.fc38      rpmfusion-free-updates    59 k

Transaction Summary
================================================================================
Skip  2 Packages

Nothing to do.
Complete!
The output's suggestion of using --best and --allowerasing didn't work, so a search yielded two threads about this issue from months earlier:
  • https://discussion.fedoraproject.org/t/unknown-update-error-with-libheif/81302
  • https://discussion.fedoraproject.org/t/rpmfusion-free-updates-libheif-freeworld-and-libheif-version-conflict/82240/7
Although the threads implied that it had been resolved and people had proposed different workarounds, the simplest solution for me was simply to remove the libheif-freeworld package:

sudo dnf remove libheif-freeworld

This seemed to resolve the issue.

The above threads indicated that a fix was submitted, but it might now be broken again: when I searched for the various package versions on Fedora and rpmfusion, the working versions of the packages weren't there:

sudo dnf search --showduplicates libheif

Maybe it's the timing of my upgrade that ran into this problem.  Fortunately, it was a quick and easy fix, but sadly I can't say that this was a completely seamless upgrade.

Monday, January 16, 2023

Goodbye OnHub!

On January 11, 2023, Google shut down support for the 7-year-old OnHub WiFi router.  Technically, Google shut down the Google Home app's support of the OnHub, because OnHub devices were technically made and sold by other companies (either ASUS or TP-Link, depending on which OnHub you had).  However, since the software was all handled by Google, if the software doesn't work then you can't do anything with it.  It will still "run," but no configuration changes can be made, nor can you see any info about the device.  The only thing you can still do is factory reset and delete the whole network.  It's unfortunate that once Google decided to stop supporting these devices, they were essentially dead.

I found the OnHub to be a great WiFi router that was powerful and easy to manage.  It had enough customization for my uses and didn't require me to be an IT admin for it.  Even though there were probably a few features I wished it had, it was balanced enough that I was willing to sacrifice them.  I wish that before its end-of-life, Google had provided a way to download its settings and load them onto a new network.  Even though I replaced the OnHub with a Google Wifi (not the Nest Wifi), I had to manually put in all the setting information, including tracking down all the devices that connect to it so that I could give each a recognizable name.  Other settings like groups, schedules, reserved IPs, IP range, etc. all had to be manually entered.  Most of it is pretty quick, but the device naming is a pain.

Over on the OpenWRT forum, there are developers trying to give the hardware additional life by replacing the software on it so it is not reliant on Google and Google's cloud.  That would be great, as it saves devices from going to landfills, and honestly the hardware is still perfectly fine even to this day.

Setting Up Windows with CyberPower UPS

Setting up a Microsoft Windows PC to power down during a power outage does not require installing any proprietary software.

Simply create a new power profile in Windows and go to the advanced settings to set the conditions under which you want to power down.



For example, the machine can be configured to power down when the battery reaches a certain point.  This works because when the CyberPower UPS is connected to the Windows machine through a USB connection, Windows sees it as being on battery power, like a laptop, and can see how much power is left.

Getting more details about the battery will probably require the CyberPower software, but for simple shutdowns no additional software is needed.

Setting Up Fedora Linux With Cyberpower UPS

These are my notes on how I set up my Fedora Linux desktop to listen to a CyberPower UPS so it knows when to shut down during a power outage.  This covers only a single machine connected directly to the UPS through USB, not the case where multiple machines need to be shut down.

The simplest way to configure your Linux system to work with a CyberPower UPS is probably to use CyberPower's PowerPanel for Linux software, but I didn't use it since I wanted to see if I could get by with non-proprietary software.  Instead, I used the open-source Network UPS Tools (NUT), which required a bit of manual work to configure.

All I want to do is have my PC shut down a few minutes after the power goes out in the house, so I bought a CyberPower UPS and used the direct USB connection for it to communicate with the PC.

Once connected, check that Linux can see the UPS:

# lsusb

You should see the UPS listed among the list of USB devices.
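
For reference, CyberPower units show up under USB vendor ID 0764 (the same vendorid used in the config below); the exact model string varies, so this lsusb line is only illustrative:

Bus 004 Device 003: ID 0764:0501 Cyber Power System, Inc. CP1500 AVR UPS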

Install the NUT software:

# sudo dnf install nut nut-client

This will install NUT and there are 5 key configuration files to be aware of:

  1. ups.conf - settings for UPS driver
  2. upsd.conf - settings for UPS daemon
  3. upsd.users - access control file for the UPS daemon
  4. upsmon.conf - settings for the UPS monitoring daemon
  5. upssched.conf - settings for scheduler daemon

Once NUT is installed, you can use one of its tools to identify the UPS; it will give you info that you can use in the config files:

# sudo nut-scanner

Add the information to /etc/ups/ups.conf:

[cps1500avr]
        driver = "usbhid-ups"
        port = "auto"
        vendorid = "0764"
        productid = "0501"
        product = "ST Series"
        vendor = "CPS"
        bus = "004"

The name cps1500avr can be anything you want; it is the name you'll be using to identify the UPS.  "usbhid-ups" is the driver for CyberPower UPSes.  All of this info came from nut-scanner.

Add a udev rule, /etc/udev/rules.d/50-ups.rules, so that Linux understands the UPS:

SUBSYSTEM=="usb", ATTR{idVendor}=="0764", ATTR{idProduct}=="0501", GROUP="nut", TAG+="systemd", ENV{SYSTEMD_WANTS}+="nut-server.service nut-monitor.service"

The TAG+ part is something I found on this blog for an issue where the systemd services don't see the UPS and thus fail to start when the system boots.

Reload the UDEV rule:

# sudo udevadm control --reload-rules
# sudo udevadm trigger

Test that the driver can start and then start/enable the service:

# sudo upsdrvctl start
# sudo systemctl start nut-driver-enumerator.service
# sudo systemctl enable nut-driver-enumerator.service

Now we need to add the NUT user that will be used to monitor the UPS to /etc/ups/upsd.users:

[nutmon]
        password = <password>
        upsmon primary
        actions = SET
        instcmds = ALL

"nutmon" can be anything you like and then you can start the server and ask it to print out the UPS info that it sees.

# sudo systemctl start nut-server.service
# sudo systemctl enable nut-server.service
# upsc cps1500avr

Edit /etc/ups/upsmon.conf by adding:

MONITOR cps1500avr@localhost 1 nutmon <password> primary

The fields are the UPS name from ups.conf (plus the host), the number of power supplies this UPS feeds (1 for a typical desktop), the user name and password from upsd.users, and the role (primary).

Start and enable the monitoring service:

# sudo systemctl start nut-monitor.service
# sudo systemctl enable nut-monitor.service

Finally, we need to tell NUT what to do when certain events happen, such as when the battery is low or when it notices that the power is out.  Edit /etc/ups/upssched.conf (thanks to this article for the example):

# Gives the script to run the commands for various signals
CMDSCRIPT /usr/bin/upssched-cmd

PIPEFN /var/lib/ups/upssched.pipe
LOCKFN /var/lib/ups/upssched.lock

# Send alerts immediately on change in line power
AT ONBATT * EXECUTE onbatt
AT ONLINE * EXECUTE onpower

# (Optional) Silence the beeper after 2 minutes
AT ONBATT * START-TIMER mute_beeper 120
AT ONLINE * CANCEL-TIMER mute_beeper

# Shutdown after 5  minutes on battery (5 * 60 = 300)
AT ONBATT * START-TIMER onbatt_shutdown 300

# Cancel timer if power's restored
AT ONLINE * CANCEL-TIMER onbatt_shutdown

# Battery replacement indicated by cron'd quick test
AT REPLBATT * EXECUTE replace_batt

Then edit the command script you put in (in my case it is /usr/bin/upssched-cmd):

#!/bin/sh

UPS_USERNAME="nutmon"
UPS_PASSWORD="<password>"
UPS_LINK="cps1500avr@localhost"


case $1 in
    onbatt)
        # make sure beeper is enabled
        upscmd -u ${UPS_USERNAME} -p ${UPS_PASSWORD} ${UPS_LINK} beeper.enable
        # alert
        message="Power outage, on battery"
        logger -t upssched-cmd "$message"
        ;;
    onpower)
        message="Power restored"
        logger -t upssched-cmd "$message"
        ;;
    mute_beeper)
         message="(2) minute limit exceeded, muting beeper"
         logger -t upssched-cmd "$message"
         upscmd -u ${UPS_USERNAME} -p ${UPS_PASSWORD} ${UPS_LINK} beeper.mute
         ;;
    onbatt_shutdown)
        message="Triggering shutdown after (5) minutes on battery"
        logger -t upssched-cmd "$message"
        /sbin/upsmon -c fsd
        ;;
    replace_batt)
        message="Quick self-test indicates battery requires replacement"
        logger -t upssched-cmd "$message"
        ;;
    *)
        logger -t upssched-cmd "Unrecognized command: $1"
        ;;
esac

Test that the system will shut down by sending it the same forced-shutdown (FSD) signal that a UPS event would trigger:

sudo upsmon -c fsd

If all goes well, get the service to start on boot:

# sudo systemctl enable nut.target
# sudo systemctl enable nut-driver.target

From now on, if a power outage lasts more than 5 minutes (i.e. the UPS is running on battery), your system will shut down.  You'll need to manually turn the system back on when the power returns.

Notes

  • I'm not sure if something automatically started and enabled the specific driver (nut-driver@cps1500avr.service) or if I did it and just forgot to note it down.
  • This article from the Arch Linux wiki describes a problem that sounds like the same one the udev TAG part addresses, but uses a different approach.

Saturday, January 7, 2023

Kernel 6.0.16 (and 6.0.17?) Causes Freeze (and how to keep previous kernels)

Update 4: Kernel 6.0.18 works!

Update 3: According to users on Reddit, 6.0.17 still exhibits the problem.  One user said that 6.0.18 has the fix.

Update 2: Decided to remove the bad kernel packages using the dnf remove command from the previous update rather than dnf history, since I only wanted to remove the kernel.  That removed the .16 kernel from the boot loader and left .15 as the default.

sudo dnf remove $(rpm -qa | grep ^kernel | grep 16-300)

Update:  Seems like others are also affected by the same issue, with a suggestion on how to remove and block the 6.0.16 kernel.

Find and remove the 6.0.16 kernel packages:

sudo dnf remove $(rpm -qa | grep ^kernel | grep 16)


Blacklist the 6.0.16 packages from getting installed again by editing /etc/dnf/dnf.conf:

# Bad packages
excludepkgs=kernel*-6.0.16-300.fc37.x86_64

Tracking bugs on Fedora:


---------------------------------------------------------------------------

Today I did a typical update of my Fedora system, which included a new kernel (kernel-6.0.16-300.fc37.x86_64), but after I rebooted and logged in, the system would freeze.  This might be the first time (or at least it's been a long time) that I've had a Fedora update not be seamless.

Fortunately, when Fedora upgrades the kernel, it keeps copies of the previous two versions, so switching back simply requires watching for the GRUB boot menu and picking the previous kernel.  Just make sure you do this before the system automatically boots into the latest version.

I was able to determine the freeze was caused by the kernel, since booting the previous kernel showed no problems after logging in.  Knowing that I had at least one working kernel, I had two choices: uninstall the latest kernel, or keep using the old one and wait to see if a future update fixes the problem.

I've not uninstalled a kernel on Fedora before and was unsure what would happen.  I could also simply try to revert the entire update, but that still leaves the same question.  Using dnf, you can review what was done by:

sudo dnf history

To see the details of a specific transaction, use the ID that the history command gives and use:

sudo dnf history info <id>

With dnf, you can choose to undo or roll back a transaction.  Undo will do the opposite of what was done (e.g. uninstall or reinstall the packages) for the specific transaction, while rollback will undo everything between that transaction and the current state of the system.
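
For example, using a transaction ID from the history list:

sudo dnf history undo <id>
sudo dnf history rollback <id>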

Since I didn't want to do either given that this involved the kernel, I decided to just keep rebooting into the previous kernel, inconvenient as it is, and hope that a future update fixes whatever is causing the problem.

Protecting the Previous Kernel

What if it takes a few more kernel updates before the problem gets fixed, and the working kernel gets removed during an update?

The versionlock plugin will lock and keep the kernel on the system.

sudo dnf install python3-dnf-plugins-extras-versionlock
sudo dnf versionlock add kernel-6.0.15-300.fc37.x86_64

The delete option will let you remove the lock:

sudo dnf versionlock delete kernel-6.0.15-300.fc37.x86_64
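
To double-check what is currently locked, the plugin also has a list option:

sudo dnf versionlock list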

Sunday, November 21, 2021

Upgrading to Fedora 35

Used the Fedora DNF upgrade method to go from Fedora 34 to 35.  Didn't notice any issues with the upgrade.  I had just upgraded to Fedora 34 a couple of weeks ago, but since I had some free time this week, I decided to upgrade now while I have time to deal with any issues.

Tuesday, November 2, 2021

Upgrading to Fedora 34

Although Fedora 35 just came out, I decided to first upgrade to Fedora 34 and give Fedora 35 a little bit of time to bake.  I've had good success using the regular Fedora DNF upgrade method, but this is the first time I've upgraded Fedora on my current machine, since it was a new system and had only ever had Fedora 33 installed.

Didn't run into any immediate issues that I could tell.

Monday, November 1, 2021

Unable to access the internet when using PiHole?

PiHole added rate-limiting for DNS queries, with a very low default of 1000 queries per minute, enabled by default even when updating an existing installation.  To change the rate limit (or turn it off), edit /etc/pihole/pihole-FTL.conf and add/edit the line:

RATE_LIMIT=0/0

The format is [# of queries]/[seconds], so a limit of 1000 queries per hour would be 1000/3600.
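
Written as the config line that would go in pihole-FTL.conf, that example is:

RATE_LIMIT=1000/3600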

Wednesday, April 7, 2021

Local Go module file and Go tools magic

I really value that when working with Go there is no "hidden magic" in source code.  Go source code is essentially WYSIWYG.  You don't see decorators or dependency injection that might change the behavior after the code is compiled or run, which would require you to understand not only the language and syntax but also additional tools' behavior on the source code.  While this is true of the language, it is not true of the go command and Go's module system.

I've personally found Go modules to be more confusing than the original GOPATH.  I understand that it solves some of the complaints about GOPATH and also addresses the diamond dependency problem, but it also adds complexity to the developer workflow and under-the-hood magic.  Maybe that's to be expected when it goes beyond source code management and adds a whole package management layer on top, but I'd be much happier dealing with this added complexity and burden if the solution were complete (how about package clean-up so my mod directory isn't growing non-stop?)!

Modules add the go.mod file, which tracks all of a project's dependencies and their versions.  This introduces a problem when one is developing both applications and libraries, since the developer may have both the released production version and an in-development version of a library locally.  To point your application at the local library without constantly changing the import path in source code, the replace directive can be used; but when committing the code, it is not ideal to submit the go.mod with the replace directives in it, as they will likely break the build for someone else checking out the code and can expose some private data (the local path might contain the user name).

Now developers have to add the replace directives locally, remove them right before submission, and then put them back (without typos!).  Fortunately, in Go 1.14, the go commands (build, clean, get, install, list, run, and test) got a new flag, -modfile, which allows developers to point the tool at an alternative go.mod file.  This allows the production version of the go.mod file to remain unmodified during development/debugging, while a local dev version of go.mod can be excluded from getting committed (i.e. .gitignored).

This can be done on a per-project level by adding -modfile=go.local.mod to go [build | clean | get | install | list | run | test]:

go build -modfile=go.local.mod main.go
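
For illustration, the local mod file is just a copy of go.mod plus your local-only directives; the module paths below are made up:

module example.com/myapp

go 1.14

require example.com/mylib v1.2.3

// local-only: point the library import at an in-development checkout
replace example.com/mylib => ../mylib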

Note that whatever the file name is, it still has to end in .mod, since the tool creates the matching local go.sum file by taking the local mod file's name and replacing the .mod extension with .sum.

To apply the use of go.local.mod globally, update "go env":

go env -w GOFLAGS=-modfile=go.local.mod

go env -w will write the -modfile value to where Go looks for its settings:

Defaults changed using 'go env -w' are recorded in a Go environment configuration file stored in the per-user configuration directory, as reported by os.UserConfigDir.

So the flow that Jay Conrad pointed out in this bug thread would be as follows (a shell sketch of the setup steps follows the list):

  1. Copy go.mod to go.local.mod. 
  2. Add go.local.mod to .gitignore.
  3. Run go env -w GOFLAGS=-modfile=go.local.mod. This tells the go command to use that file by default.
  4. Add any replace and exclude directives or other local edits.
  5. Before submitting and in CI, make sure to test without the local file: go env -u GOFLAGS or just -modfile=. 
  6. Probably also go mod tidy.
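
A minimal sketch of steps 1-3 in shell, run from the project root (also ignoring the derived go.local.sum, per the note above):

cp go.mod go.local.mod
echo "go.local.mod" >> .gitignore
echo "go.local.sum" >> .gitignore
go env -w GOFLAGS=-modfile=go.local.mod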

Tuesday, April 6, 2021

Listing installed packages on Fedora with DNF

To list the packages that are user installed:

dnf history userinstalled

To list all installed packages:

dnf list installed
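
To look for a specific package among the installed ones, pipe the output through grep, e.g.:

dnf list installed | grep kernel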

Friday, January 1, 2021

2021 PC - Asus PN50 4800U

Although I was very tempted to build a new desktop PC and get access to all the power goodness of the latest AMD Ryzen, I was hesitant to give up the small form factor that I had with my Shuttle PC DS87.  When the Asus PN50 with the AMD Ryzen 4800U became available, I took the plunge.

The specs comparison between the previous and new PCs:

New PC:

  • Ryzen 7 4800U [Zen2] (8 cores / 16 threads, base clock 1.8GHz, max 4.2GHz - 8 GPU cores - RX Vega 8, 15W)
  • 32 GB Crucial DDR4 3200Mhz  RAM (2x16GB)
  • 1TB Samsung 970 EVO Plus (M.2 NVMe interface) SSD
  • 500GB Crucial MX500 SATA SSD (2.5")
  • Intel WIFI 6, BT 5.0

Previous PC:

  • Shuttle PC DS87
  • Intel Core i7-4790S Processor (8M Cache, base clock 3.2 GHz, max 4.0GHz, 65W)
  • Samsung 850 EVO 500GB 2.5-Inch SATA III Internal SSD (MZ-75E500B/AM)
  • 2 x Crucial 16GB Kit (8GBx2) DDR3 1600 MT/s (PC3-12800)

There are enough sites giving benchmarks, so I'm not going to try to repeat what they've done, but I wanted something that would show me a tangible performance improvement.  It is generally during compilation that I wish things would go faster, so why not compare compilation between the two systems?  The extra cores (8 vs. 4) and threads (16 vs. 8) should benefit compilation even if the base clock of the 4800U is 1.8GHz while the i7's is 3.2GHz.  I'm also expecting a modern CPU to be more efficient per clock cycle than a 6-year-old one.

I decided to time the compilation of OpenCV using the following:

wget -O opencv.zip https://github.com/opencv/opencv/archive/master.zip
unzip opencv.zip 
mkdir -p build && cd build
cmake ../opencv-master/
time cmake --build .

i7 Results

real   28m57.219s
user   26m48.466s
sys     2m01.402s

4800U Results

real     36m48.166s
user     34m54.722s
sys       1m52.574s

How did this happen?  Was the i7's 3.2-4.0GHz too much for the 4800U's 1.8-4.2GHz to overcome?  It did seem like all of the i7's cores were running at around 3.6GHz during compilation, but I suspected that the build was not actually taking advantage of all the cores of the 4800U.
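
My suspicion: with the default Makefile generator, cmake --build . invokes make with no job count, so the build runs one job at a time regardless of core count.  Passing a job count through to make (the -- forwards flags to the native build tool; nproc reports the number of cores) would presumably have leveled the field:

time cmake --build . -- -j$(nproc)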

I tried again using Ninja, which automatically parallelizes the build across the available cores.

make clean
cmake -GNinja ../opencv-master/
time ninja

i7 Results

real	11m28.741s
user	85m39.188s
sys	 3m23.310s

4800U Results

real      6m39.268s
user     99m03.178s
sys       4m8.597s

This result looks more like what I expected.  More total CPU time (user) was spent on both the i7 and the 4800U as more cores and threads were utilized, but the real (wall-clock) time was much shorter.  This just shows that for a lot of consumers, fewer cores with faster clock speeds might be better for desktops (laptops and battery life add another dimension), since they rely on applications being programmed to take advantage of multiple cores.  That's why gaming systems usually give up more cores for faster clock speeds: games aren't known for utilizing many cores.

Wednesday, April 8, 2020

BASH startup file loading order

The loading order of Bash startup files depends first on the type of shell Bash thinks it is in.
A login shell is started when the user logs in from a tty (not through a GUI) or logs in remotely (e.g. through ssh).
A non-login shell is started in other ways, such as gnome-terminal being launched by Gnome (it was a login shell that launched Gnome).
Note: The exception is OSX's Terminal.app, which is treated as a login shell.
For non-login shells (e.g. gnome-terminal) the order is:

   /etc/bash.bashrc
   ~/.bashrc
For login shells (including Terminal.app) the order is:

   /etc/profile

Followed by the first of the following that exists:

     ~/.bash_profile
     ~/.bash_login
     ~/.profile
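
A quick way to confirm the order on your own system is to put a temporary echo line at the top of each file, then compare what prints in a new terminal window versus an ssh login:

echo "loading /etc/profile"     # first line of /etc/profile
echo "loading ~/.bash_profile"  # first line of ~/.bash_profile
echo "loading ~/.bashrc"        # first line of ~/.bashrc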

Wednesday, December 11, 2019

Upgraded to Fedora 31

*Update: 3/23/2020* Updated everything to the latest packages (including the latest Terminus font).  After restarting, the terminal will still be messed up until you switch the font to "Terminus Medium".  Once switched to Terminus Medium, it looks the way it did before.

Upgraded to Fedora 31, but unfortunately, for the first time in many releases, I've encountered problems with the upgrade.  :-(

Fedora decided to drop support for Bitmap fonts


Technically, they aren't saying they are dropping support, and they consider it a known problem, but users of bitmap fonts such as Terminus will find their terminals showing garbage characters.  Fedora has instructions on how to convert bitmap fonts to OpenType fonts, but the instructions (even though they specifically use Terminus as an example) do not work.

The solution I found was to downgrade the Pango package to the Fedora 30 version (before this breaking change was added):

sudo dnf downgrade --releasever 30 pango-1.43.0-4.fc30.x86_64

The downside is that it's necessary to keep an eye on updates so I don't go back to the version that doesn't support the font.  :-(

Bugs:

https://bugzilla.redhat.com/show_bug.cgi?id=1753295
https://bugzilla.redhat.com/show_bug.cgi?id=1748495
https://bugzilla.redhat.com/show_bug.cgi?id=1754492
https://bugzilla.redhat.com/show_bug.cgi?id=1766201

This also meant that any future update has to be:

sudo dnf update --exclude pango,pango-devel,nemo,nemo-extension

SMB mounting stopped working

https://bugzilla.redhat.com/show_bug.cgi?id=1768117

Update /etc/samba/smb.conf with:

[global]
client min protocol = NT1

Saturday, June 15, 2019

Upgrading Fedora 29 to 30

Did another upgrade, from Fedora 29 to 30.  Didn't run into any issues to report.  Very smooth upgrade process.

Update:  I noticed that Chrome now shows that it's "managed by your organization".  This message shows up if there are any chrome://policy entries defined.  Turns out Fedora 30 does install some policies.  To remove them: 'sudo dnf remove fedora-chromium-config'.

Monday, May 28, 2018

Backup and Restore Your Linux System and Home Directory

It's a good idea to regularly back up the files on your computer in case there is ever a hard drive failure.  OSX and Windows come with basic backup software that is very easy to use: simply connect an external USB drive to your computer and enable the backup software.  Google even offers the ability to back up to your Google Drive so your data is stored at a different physical location.

A Linux system can be backed up very simply with a command line tool called rsync, and with a little scripting the backup process can be automated just like it is on OSX and Windows.

Encrypting the Backup Drive

Before backing up your files, it is a good idea to encrypt the external drive where you'll be storing your backups.  This prevents someone from taking the external drive and accessing your files.  Use the Disks utility, select the drive, and reformat the partition as encrypted.  Note that this means that in the future only a Linux system (along with the password) can access the external drive.

Backing Up the Home Directory

I create two separate backups: system and home.  The system backup covers my system configuration, while the home directory contains the users' data.  There are a lot of files and directories in the home directory that don't need to be backed up and might be consuming a lot of disk space.  These include cached files, trash files, etc.  It's best to exclude them.

I found this file to be a good starting point for things to exclude from the home directory.  With this file (modified to your liking), use rsync to back up the home directory (this example backs up just your own home directory):

rsync -aP --exclude-from=[exclusion file] /home/$USER [path to backup location]

Make sure that you've mounted the external drive and have the path correct.  Use the 'n' option (e.g. rsync -naP ...) to do a dry run where it doesn't actually copy any files, so you can test first.

If it all works correctly, a crontab can be made to automatically repeat the process, although you'll still need to mount the drive with the encryption passphrase unless you set it up so the cron job has access to the passphrase.  I just created a bash script that I run manually.
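
For reference, a minimal sketch of such a script, assuming a mount point and exclusion file like the ones below (adjust the paths to your setup):

#!/bin/bash
# Back up my home directory to an already-mounted encrypted drive.
# DEST and EXCLUDES are example paths -- change them for your system.
DEST="/run/media/$USER/backup/home"
EXCLUDES="$HOME/.rsync-home-excludes"

# Refuse to run if the drive isn't mounted, so we don't fill the mount point.
if [ ! -d "$DEST" ]; then
    echo "Backup destination $DEST is not mounted" >&2
    exit 1
fi

rsync -aP --exclude-from="$EXCLUDES" "/home/$USER" "$DEST"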

Backing Up the System

Backing up the system is very similar except there are some additional directories to exclude and the starting point is "/" instead of the home directory.  

1.  Create a file similar to the one for excluding home directories (see above).  This file is simpler since it only contains directories:
/dev
/proc
/sys
/tmp
/run
/mnt
/media
/lost+found
/home

Then you'd run (again, use -n first to do a dry run):

sudo rsync -aAXv / --exclude-from=[file of excluded directories] [path to backup location]

Note that this time the command uses "sudo", since some system files require root privileges.
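
Restoring is essentially the same rsync invocation in reverse: copy from the backup location back to the target, again testing with -n first.  A hypothetical system restore from a live USB session, with the backup drive and the target root both mounted, might look like:

sudo rsync -aAXv [path to backup location]/ /mnt/targetroot/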

Saturday, December 23, 2017

Youtube TV vs Hulu

Youtube TV and Hulu TV are two options for cord-cutters to get live TV.  After trying each for a few months, I decided to go with Youtube TV because of its superior quality, even though it had a smaller channel line-up.

Saturday, August 19, 2017

Upgrading to Fedora 26

Upgrading to Fedora 26 was very smooth.  The one change I had to make was because I use the Terminus font: in Fedora 26, the font name changed, so in the apps where I'd picked Terminus (terminal and conky), I had to change it to the new name.
On one system that is connected to a TV through HDMI, the login screen has the wrong refresh rate and resolution.  I haven't figured it out yet.

Tuesday, July 25, 2017

Thoughts on Hugo

I posted earlier about moving from Blogger to Hugo, and it has been a couple of months, so I thought I'd share my thoughts on how Hugo has been working out.

Thursday, June 8, 2017

Moving from Blogger to Hugo

I really liked Blogger, and I'd hosted my blog on it since 2011.  It was free (still is), had all the essential features for a blog at the time, and I liked that it was integrated with Google.
My blogging needs haven't changed since then, but the world has evolved and Blogger no longer has all the features essential for a blogging platform.  Specifically, I'm talking about Blogger's lack of support for SSL/TLS on custom domains.  I could accept the dated editor controls, quirkiness in the WYSIWYG UI, and limited customization of themes, but there's no excuse for not having https enabled on a web site anymore.
I’ve now moved Lazy Hacker Babble from Blogger to Hugo + Google App Engine. Hugo is a static site generator written in Go. It takes your markdown file and generates an entire web site consisting of static files so it doesn’t require databases runtimes, extra libraries, etc. Since it’s written in Go, Hugo comes as a stand-alone binary so there is no need to install a bunch of extra software in order to run it.