
Tuesday, March 25, 2025

Why the Model Context Protocol (MCP) Is Confusing

**Update 3/26/2025**

OpenAI added MCP support to their Agents SDK.


--------------------------------

On Nov. 25, 2024, Anthropic published a blog post about the Model Context Protocol (MCP).  It didn't get much attention at the time because it wasn't an announcement of much significance.  Suddenly in 2025, MCP has gotten a lot of hype and attention, and it has also caused a lot of confusion as to why there are so many YouTube talking heads discussing it.

It was nice that Anthropic published how they connect Claude with tools in the Claude Desktop app, even if the post was a bit of marketing to sell it as a standard and to encourage an open community.  There is a technical aspect (a protocol) to it, but it felt like a business play to get developers to extend Claude with plugins.

Large Language Models like Claude cannot perform any actions.  They're like a brain with no body.  They might know that an email should be sent, but they can't actually send the email.  Connecting the "thought" (send email) with the action requires basic programming.  MCP is how Anthropic does it, but it isn't the only way.  Let's take a look at various ways this is accomplished and then see how MCP fits in.

You Are the Agent

In this scenario, a person goes to claude.ai and has a conversation with Claude about writing an email to invite someone to lunch.  Claude generates the email body and lets the person know to copy it into an email program to send.  The person manually copies that text into their email or calendar app and sends the invitation.  The person is the agent because they are performing the action.

Using an AI Assistant

Here, a person uses an app (this can be a web app, desktop app, mobile app, etc.) such as a personal assistant à la Jarvis from Iron Man.  The user asks Jarvis to send an email to invite someone for lunch.  Jarvis composes the invitation message and sends the invitation through a calendar app so that it also records the event on the calendar.  So how does Jarvis do this?

Method 1

  1. Jarvis sends your question to the LLM along with prompts describing available tool(s).
  2. Jarvis looks at the response to determine what tools to use.
  3. Jarvis executes the chosen tool(s) through the tool API.
  4. The results are sent back to the LLM.
  5. The LLM formulates a natural language response.
  6. The response is displayed to you!

In the early days (2023), this might be done like this:

The user speaks to Jarvis, “Jarvis, invite Thor to lunch next Wednesday.”

The Jarvis code passes the text to an LLM along with an additional prompt:

Respond to the user, but if the request is to add to a calendar then respond in JSON:

  {
    "tool": "calendar",
    "invitee": "<name>",
    "date": "<date>",
    "time": "<time>",
    "body": "<message>"
  }

The Jarvis program gets the response, and if it is a tool response it parses the JSON and calls the calendar API.

The Jarvis code then calls the LLM again with the text, "tell the user that the invite was successfully sent," and returns the response to the user.
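
Here's a minimal sketch of Method 1 in Python.  The call_llm() and calendar_api_send() helpers are hypothetical stand-ins for whatever LLM client and calendar API Jarvis actually uses; the point is that tool detection is just prompt text plus JSON parsing:

import json

TOOL_PROMPT = """Respond to the user, but if the request is to add to a calendar
then respond only with JSON:
{"tool": "calendar", "invitee": "...", "date": "...", "time": "...", "body": "..."}"""

def handle_request(user_text: str) -> str:
    # Ask the LLM, with the tool instructions prepended to the user's request.
    response = call_llm(TOOL_PROMPT + "\n\nUser: " + user_text)
    try:
        tool_call = json.loads(response)
    except json.JSONDecodeError:
        return response  # plain conversational answer; no tool needed
    if tool_call.get("tool") == "calendar":
        calendar_api_send(tool_call["invitee"], tool_call["date"],
                          tool_call["time"], tool_call["body"])
        return call_llm("Tell the user that the invite was successfully sent.")
    return response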

Method 2

  1. Developer registers available tools.
  2. Jarvis sends your question to the LLM.
  3. The LLM analyzes the available tools and decides which one(s) to use.
  4. Jarvis executes the chosen tool(s) through the tool API.
  5. The results are sent back to the LLM.
  6. The LLM formulates a natural language response.
  7. The response is displayed to you!

An enhancement was added to many LLMs' APIs to allow developers to register tools, their purpose, and their parameters.  The Jarvis code registers a tool called "calendar", gives it a description such as "Tool to add, update and remove user's calendar.", and specifies what parameters it needs.

Now, when Jarvis passes "Jarvis, invite Thor to lunch next week," to the LLM, the LLM will respond with JSON and Jarvis can call the calendar API.

The Jarvis code then calls the LLM again with the text, "tell the user that the invite was successfully sent," and returns the response to the user.
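
As a sketch, here's what Method 2 looks like with an OpenAI-style function-calling API (the exact request shape varies by provider, and calendar_api_send() is again a hypothetical stand-in):

import json
from openai import OpenAI

client = OpenAI()

# Register the tool: its name, purpose, and parameters.
tools = [{
    "type": "function",
    "function": {
        "name": "calendar",
        "description": "Tool to add, update and remove user's calendar.",
        "parameters": {
            "type": "object",
            "properties": {
                "invitee": {"type": "string"},
                "date": {"type": "string"},
                "time": {"type": "string"},
                "body": {"type": "string"},
            },
            "required": ["invitee", "date"],
        },
    },
}]

messages = [{"role": "user", "content": "Jarvis, invite Thor to lunch next week."}]
response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)

# The LLM decides whether a tool is needed and returns structured arguments.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    args = json.loads(tool_calls[0].function.arguments)
    calendar_api_send(**args)  # hypothetical calendar API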

Method 3 (MCP)

  1. User registers available tools.
  2. Jarvis sends your question to Claude.
  3. Claude analyzes the available tools and decides which one(s) to use.
  4. Jarvis executes the chosen tool(s) through the MCP server, which calls the tool API.
  5. The results are sent back to Claude.
  6. Claude formulates a natural language response.
  7. The response is displayed to you!

With MCP, the user (on desktop/mobile) or developer (on cloud) registers MCP servers with Jarvis.  Jarvis can then get the tool descriptions from the MCP server, which it passes to the LLM.

When Jarvis passes "Jarvis, invite Thor to lunch next week," to the LLM, the LLM will determine the tool to use.

Jarvis will then call the MCP server to send the calendar invite.

The Jarvis code then calls the LLM again with the text, "tell the user that the invite was successfully sent," and returns the response to the user.
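
To make the MCP side concrete, here's a minimal calendar MCP server sketch using the official Python SDK's FastMCP helper (API as of early 2025; calendar_api_send() is still a hypothetical stand-in).  Jarvis would register this server, fetch its tool descriptions, and pass them to Claude:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("calendar")

@mcp.tool()
def send_invite(invitee: str, date: str, time: str, body: str) -> str:
    """Add an event to the user's calendar and invite someone."""
    calendar_api_send(invitee, date, time, body)  # hypothetical calendar API
    return f"Invite sent to {invitee} for {date} {time}"

if __name__ == "__main__":
    mcp.run()  # serves the tool description and tool calls over stdio by default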

Comparison

With MCP, tool registration is passed to the user and the tool description is handed off to the tool developer, but otherwise the steps remain the same.

Method 2

  1. Developer registers available tools.
  2. Jarvis sends your question to the LLM.
  3. The LLM analyzes the available tools and decides which one(s) to use.
  4. Jarvis executes the chosen tool(s) through the tool API.
  5. The results are sent back to the LLM.
  6. The LLM formulates a natural language response.
  7. The response is displayed to you!

Method 3 (MCP)

  1. User registers available tools.
  2. Jarvis sends your question to Claude.
  3. Claude analyzes the available tools and decides which one(s) to use.
  4. Jarvis executes the chosen tool(s) through the MCP server, which calls the tool API.
  5. The results are sent back to Claude.
  6. Claude formulates a natural language response.
  7. The response is displayed to you!

Comparing the different methods shows that the steps are the same; just the implementation is different.  This is one reason there's a lot of confusion: there seems to be very little benefit.

Having a standard protocol can be advantageous, but only when all the major LLMs adopt it; otherwise it is just how to interact with Claude.


MCP servers are potentially reusable and might ease integration, which is a benefit since it'll be like having only one API to learn.  But this requires wide adoption and availability, which isn't a given even if it is backed by one of the big LLM providers.

Shortcomings of the MCP

As a protocol, it has a lot of shortcomings, and its technical benefits are minor.

Some technical shortcomings are:

  • There's no discovery mechanism other than manual registration of MCP servers.
  • There are now extra MCP servers in the tech stack doing work that could be achieved by a library.

Thus the main benefits will come only if the protocol is adopted as a standard.

Tuesday, February 18, 2025

How I Avoid Doomscrolling/Doomsurfing

For those not familiar with the term doomscrolling, Wikipedia describes it as:

Doomscrolling or doomsurfing is the act of spending an excessive amount of time reading large quantities of news, particularly negative news, on the web and social media. (Wikipedia)

There are negative consequences of doomscrolling on people's mental and physical health, such as increased stress, anxiety, depression, and isolation.  Suggestions on how to break the habit and combat its negative effects include limiting screen time and seeking out more positive news.  In our current environment there are numerous powerful forces working to keep people doomscrolling: corporations prioritizing engagement (keeping you hooked), publishers vying for your attention (often through negative news), and political leaders fueling fear.

Although I don't spend much time on social media, I do regularly read the news, follow current events, and follow various feeds on topics I'm interested in.  To avoid doomscrolling, some people are able to stop following the news altogether, but I find that difficult to achieve myself.  Since publishers don't give readers much control over what is shown, I built my own news aggregation site: news.lazyhacker.com.

Now, instead of seeing what publishers want me to see, or these sets of headlines from feeds (which also illustrate how much political news is pushed on us):

- Judge Chutkan rejects call from Democratic AGs for temporary restraining order blocking DOGE’s access to federal data - CNN
- Russia and US agree to work toward ending Ukraine war in a remarkable diplomatic shift - The Associated Press
- Pope Francis, still hospitalized, has pneumonia in both lungs - The Washington Post
- Fact Sheet: President Donald J. Trump Expands Access to In Vitro Fertilization (IVF) - The White House
- National Science Foundation fires roughly 10% of its workforce - NPR
- 'Executive order' cited as reason for sudden closure of JFK Library in Boston - WCVB Boston
- Ensuring Accountability for All Agencies - The White House
- Native American Activist Leonard Peltier Released From Prison - The New York Times
- Donald Trump signals Ukraine should hold elections as part of Russia peace deal - Financial Times
- Senate GOP pushes ahead with budget bill that funds Trump's mass deportations and border wall - The Associated Press
- Brazil Charges Bolsonaro With Attempting a Coup - The New York Times

I see a variety of headlines based on my own preferences:

- Pope Francis, still hospitalized, has pneumonia in both lungs - The Washington Post
- National Science Foundation fires roughly 10% of its workforce - NPR
- 'Executive order' cited as reason for sudden closure of JFK Library in Boston - WCVB Boston
- Rare deep-sea ‘doomsday fish’ washes up on Canary Islands coast - The Independent
- Hamas to release 6 more hostages, bodies of 4 others - ABC News
- Dramatic video shows moment Delta plane flipped after landing in Toronto - ABC News
- Futures Rise After S&P 500 Hits High; Two Earnings Losers Late - Investor's Business Daily
- Nvidia’s 50-series cards drop support for PhysX, impacting older games - Ars Technica
- AMD Ryzen AI Max+ 395 Analysis - Strix Halo to rival Apple M4 Pro/Max with 16 Zen 5 cores and iGPU on par with RTX 4070 Laptop - Notebookcheck.net
- Nintendo is killing its Gold Points loyalty program - Engadget
- iPhone 17 Air Leaks Look More Like Google Pixel - Forbes

The headlines can come from different feeds from places like Google News, Reddit, and any source that offers an RSS feed.  The site takes the headlines from the feeds and runs them through a set of rules that I defined in natural language (e.g. "Remove political headlines, headlines about political figures or those who are not politicians but politically active.") to strip out any headlines that I might not want to see.  I purposely don't show any images and only update the site every couple of hours.  The former reduces the chance of me wanting to read an article because of the image rather than the substance, and the latter reduces my urge to constantly refresh because I know there will be no new headlines for another 2 hours.
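
As a rough sketch of the filtering step (call_llm() is a hypothetical stand-in for whatever LLM client the site uses):

import feedparser  # third-party RSS/Atom parser

RULE = ("Remove political headlines, headlines about political figures "
        "or those who are not politicians but politically active.")

def filter_headlines(feed_url: str) -> list[str]:
    # Pull every headline from the feed, then let the LLM apply the rule.
    headlines = [entry.title for entry in feedparser.parse(feed_url).entries]
    prompt = ("Apply this rule to the headlines below and return only the ones "
              "that pass, one per line.\nRule: " + RULE + "\n\n" + "\n".join(headlines))
    return call_llm(prompt).splitlines()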

Now, instead of finding myself being lured into doomscrolling, I can go to my site and see a curated set of headlines like the list above.


Sunday, January 26, 2025

Upgrading from Fedora 40 to 41

Decided it's time to upgrade from Fedora 40 to 41, and it was a smooth upgrade.  Will update if I run into any issues, but I've already been running 41 on a laptop for a couple of months, so I know that 41 works for what I need; this time it was to upgrade systems that were already running an earlier version.

Wednesday, January 1, 2025

Powerline Error in VIM with Git

On Fedora 41, opening a Go file that is checked into Git results in an error message that flashes briefly before VIM shows the file.  I'm not sure of the exact cause, but it went away when I uninstalled powerline and reinstalled it.  Uninstalling just powerline also uninstalled VIM for some reason, so the re-install was actually `dnf install vim powerline`.

Saturday, December 28, 2024

Framework 13 (AMD) First Impressions

I recently purchased a Framework 13 (AMD) laptop to replace my 7-year-old Razer Stealth as my travel companion and wanted to share my first impressions after taking it on a trip and using it for about a month.

My main criteria for a travel laptop is that it must be light, but as I've gotten older I've added a few more things to look for:

  • Display must support good scaling.  As displays have improved their resolutions, everything shown has gotten smaller, so I have to scale it up for it to be comfortable for my eyes.
  • Replaceable battery.  As evident from using my previous laptop for 7 years and still not feeling the need to upgrade, I tend to keep my laptops for a long time, especially since I don't rely on my laptop to be my primary driver.  While most parts of a laptop are capable of lasting a while, batteries are a different story.  I've had to replace the battery on my Razer twice because they started to swell.  This is not exclusive to Razer, as I've had it happen on MacBooks and Pixelbooks as well.
  • Linux support.  I mostly use Linux, especially for development, so I'm most comfortable with it, but I occasionally do have use for Windows (some games the family plays, camera apps, etc.).  The key reason, though, is that Windows is commercial and I don't want to be forced to pay to upgrade if it is not necessary.  The Razer Stealth ran Windows 10, and Microsoft says it's not compatible with Windows 11, so I either have to live without security updates or try to install Linux when Razer doesn't support Linux in any way.  Having good Linux support is a way to future-proof the laptop somewhat.
Given these criteria, I settled on the Framework.  It is lightweight (1.3 kg/2.9 lb) with a 13.5" 2880x1920 120Hz 2.8K matte display (the uniqueness of Framework is that it is modular, so you can swap in different displays even if new ones are released in the future) in a 3:2 ratio, which allows better scaling, especially with Linux.

The battery is replaceable (nearly everything on the Framework is replaceable/upgradable) and it fully supports Linux (Ubuntu and Fedora being the officially supported distributions, but it seems like most Linux distributions will work), even the fingerprint sensor.

First Impressions

Ordering the Framework, especially the DIY version, presents you with more choices than the typical ordering process.  Not only do you pick how much storage you want, you have options for brand/specs for all the major components: display & bezel, keyboard, CPU, memory, storage, and ports.  If you're familiar with computer components, most of the parts are understandable from the description, but for the keyboard it didn't explain the difference between US-English, International-English Linux, and International-English (International English has the Euro key, and Linux swaps out the Windows key for a Super key).

I was impressed how quickly the laptop shipped and arrived given that it comes directly from Taiwan (where it was manufactured) and that each order has a different set of expansion ports (although I did pick a pretty standard set).  It arrived faster than the 5-7 business days listed when I ordered.

The packaging was nicely done to prevent anything from shifting during transit, and everything is recyclable.  The packaging includes the screwdriver needed to assemble all the components.

It took about 20 minutes to put everything together, and the instructions were good.  A lot of people could probably figure it out even without the instructions, but the instructions really prepare you.  They suggest putting on the bezel starting at the bottom first, which definitely allowed it to fit better without fidgeting, and they warn that the first boot will take longer so you don't worry about whether it not immediately starting up meant you did something wrong.

Framework gives pretty good instructions on installing the operating system.  For Windows, it anticipates that you might not want to use a Microsoft account, so it tells you how to bypass it, and it deals with the fact that the laptop drivers aren't there during installation, explaining how to get past the part where Microsoft wants networking working just to complete the installation.  For Linux, the instructions were decent but maybe a little outdated, especially the screenshots.  Although Fedora is one of the two officially supported distributions, the Ubuntu guides seemed more comprehensive.  They also favor Gnome in their instructions.

You can get the Framework with either an Intel processor/motherboard or an AMD processor/motherboard, and although the general sense is that the AMD version performs better, there's more information on the Intel side.

The display looks very nice with good contrast and brightness.  It's very comfortable to the eyes, and scaling in Linux was not a problem.  No complaints about the touchpad, and it worked with Linux out-of-the-box.  The keyboard is comfortable with good travel and spacing.  It wasn't too squishy.  If the ThinkPad is the bar, this isn't as good as that, but it's better than the last MacBook Pro I used.

The fingerprint sensor worked out-of-the-box as well, but if you aren't using Gnome, you need to use the command-line tool fprintd-enroll to register your fingerprint.

It's not clear whether Framework thinks you should run tuned-ppd to manage power profiles for both Intel and AMD, or whether that's just for Intel and you should stick with power-profiles-daemon for AMD.  On Fedora 41, if you install power-profiles-daemon, then each time it wants to change the profile (such as when you plug/unplug the power) SELinux will block it and give you a warning.

Although I had no problems with WIFI, the wifi chip it comes with always seems to have a weaker signal than other devices.  I think some people swap it out for the one that comes with the Intel board, so it's something I'm watching out for.

I've been pretty happy with the laptop so far and hope it'll last a long time.  I like the company's mission and hope they continue to succeed with their vision of modular, sustainable and environmentally friendly laptops.

System Configuration

  • AMD Ryzen™ 5 7640U
  • 2880x1920 120Hz 2.8K matte display
  • Crucial RAM 32GB Kit (2x16GB) DDR5 5600MHz
  • WD_BLACK 2TB SN850X NVMe SSD Solid State Drive - Gen4 PCIe, M.2 2280, Up to 7,300 MB/s

Friday, December 13, 2024

Installing Windows 11 without WIFI/Network Connection

When installing Windows 11, the installer assumes you have a network connection and doesn't let you proceed without one.  To bypass this, press Shift+F10 to open a terminal and type:

oobe\bypassnro

The laptop will restart and when it reaches the network connection page there will be a new option for no network that will allow you to continue the installation.

Friday, November 8, 2024

Raspberry Pi Downloads Public Key

Raspberry Pi public key for https://downloads.raspberrypi.org

> gpg --keyserver hkps://keyserver.ubuntu.com --recv-key 54C3DD610D9D1B4AF82A37758738CD6B956F460C

Verify the downloaded files with the GPG signatures, which should be in the same location as the files themselves:

> gpg --verify abc.xyz.sig abc.xyz

Tuesday, October 22, 2024

Enabling Japanese/Chinese (international) Input on Fedora

Enabling input methods for languages other than English on Fedora requires you to manually add the language you want through the Input Method Selector (im-chooser) tool, selecting the ibus option and picking the input method you want to use.  I always forget this and go to Settings > Languages or Settings > Keyboard, but neither of those is for adding input methods.

*** Update 3/2024 ***

The above method stopped working on Fedora 41 (Cinnamon).  It seems like the Input Method Selector (im-chooser) isn't able to talk to ibus, so it won't start the ibus-daemon.  A more direct way is to run ibus-setup and ibus-daemon yourself.

First, add to your .bashrc the following lines:

export GTK_IM_MODULE=ibus
export XMODIFIERS=@im=ibus
export QT_IM_MODULE=ibus

Then the following commands can be used:

ibus-setup

Lets you set up the languages, short-cut keys, etc.

ibus-daemon -d

Starts the daemon in the background.

There's another system called fcitx that can be used instead of ibus.  

Saturday, September 21, 2024

Installing Nvidia and CUDA drivers on Fedora for Ollama

Ollama, a tool that lets you run Large Language Models (LLMs) on a local machine, supports the use of GPUs for better performance if you have a supported graphics card and corresponding drivers.  Having recently gotten a supported Nvidia card, I wanted to get it working with Ollama but found the available documentation on how to install the Nvidia and CUDA drivers confusing because there are multiple ways to install them.  Depending on where you start your search for instructions, it can take you down different paths.

If you start on the Ollama install instruction page, it directs you to the Nvidia CUDA Toolkit Downloads page to add Nvidia's CUDA repository to your Fedora instance.  From the repository you can install the CUDA toolkit, modules, and drivers (CUDA and Nvidia).  For some reason, the repositories are currently tagged for Fedora 37 and 39, but they seem to work for Fedora 40.  I'm not sure if that will always be the case or will work with future versions of Fedora.

If you first go to Nvidia's site to search for the driver, it will direct you to their drivers download page, where you can download a .run script that installs the Nvidia drivers (not CUDA).  This works but bypasses your package manager, so I'm not sure if conflicts will arise in the future.  It also seems to be separate from what is in the CUDA repository, so I'm not certain if there might be conflicts now or later.  As of this writing, installing the drivers from the .run script and installing the CUDA toolkit from the repository does work, but I didn't install the Nvidia drivers from the repository.

If you start with a web search or the Fedora forums, the answer there is to install Nvidia from RPMFusion, which has both the Nvidia and CUDA drivers.  This seems to be the most compatible option for Fedora.  If you're already using RPMFusion, then it is really your only option, since RPMFusion and Nvidia's repo are not compatible and will require some DNF magic to get the two working together.  I also like this option because Ollama only needs the CUDA drivers and not the whole toolkit (I think you might be able to just grab the CUDA part from Nvidia's repo, but their instructions direct you to download the whole toolkit).

Installing Nvidia and CUDA for Fedora

Here is how I installed a fresh new instance of Fedora 40 with Nvidia and CUDA drivers to work with Ollama.

I created a Fedora 40 Cinnamon Spin boot drive with the Fedora Media Writer and booted up the machine with it to do a clean install.  Once it finished with the installation, I rebooted the machine and set up a network connection so I can download updates and the drivers.

Open up a terminal and change the run level to 3 (multi-user command line, no GUI):

sudo init 3

The first time you run sudo dnf update, it'll probably update a whole bunch of the windowing system packages and might cause your current window manager to crash; switching to runlevel 3 avoids having the GUI and windowing system running while you're doing the update.

Once in command-line mode, update the system with the latest packages and kernel:

sudo dnf update

Once it's been updated, reboot the system so it's running on the latest kernel:

sudo /sbin/reboot now

I went back into runlevel 3 since I'd be updating the graphics drivers, but this time I did it at the GRUB boot menu.  When the boot menu comes up, hit the 'e' key to edit, add '3' at the end of the linux line, and then press CTRL-X to continue booting.  This change is not permanent.

Install the developer tools needed to compile the modules:

sudo dnf groupinstall "Development Tools"

Now it's time to add the RPMFusion free and nonfree repos so Fedora knows where to download the drivers and modules.

Import the GPG keys for the RPMFusion free and nonfree repos to verify that the packages installed from them are the actual ones:

sudo dnf install distribution-gpg-keys

sudo rpmkeys --import /usr/share/distribution-gpg-keys/rpmfusion/RPM-GPG-KEY-rpmfusion-free-fedora-$(rpm -E %fedora)

sudo rpmkeys --import /usr/share/distribution-gpg-keys/rpmfusion/RPM-GPG-KEY-rpmfusion-nonfree-fedora-$(rpm -E %fedora)

Add the repository to Fedora:

sudo dnf --setopt=localpkg_gpgcheck=1 install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm

sudo dnf --setopt=localpkg_gpgcheck=1 install https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

Install the Nvidia drivers with:

sudo dnf install akmod-nvidia

Make sure to give the system time to compile the modules AFTER the package install!

Check that the akmod-nvidia module is fully built with:

modinfo -F version nvidia

Install the CUDA drivers:

sudo dnf install xorg-x11-drv-nvidia-cuda

Reboot the machine again and let it automatically go to the GUI (runlevel 5).

In a terminal, check that the Nvidia driver is being used:

nvidia-smi

Now you can install Ollama which should tell you that it has Nvidia GPU support at the end of the install.

Thursday, May 30, 2024

Upgrading from Fedora 39 to 40

Upgraded to Fedora 40 following the normal procedure, hoping that it might resolve an annoying issue that started in April where SELinux kept alerting that dns_resolver is trying to setattr.  The issue is very similar to this bug, except it's setting a different key.

The upgrade went without any problems, but the same alerting continued.  It seems to be caused by trying to mount an smb shared drive.  For these issues, I can usually wait a few days for a fix to be issued, but in this case the bug remains open and the frequency of the alerts seems to have increased.  :-(


Saturday, April 13, 2024

Upgrading Fedora 38 to Fedora 39

 Even though Fedora 40 is coming out in a few days (or because of it), it was time to upgrade from 38 to 39.  

No immediately noticeable problems.  It downloaded the packages, installed them, and rebooted cleanly.

One reason for upgrading is that my Fedora 38 ran into problems with NUT being able to load the driver for the UPS.  This seemed to be because Fedora 38 had upgraded the NUT packages and there's an issue with the newer version.  I noticed that Fedora 39 actually uses the previous package version, and when the system came up after the upgrade, NUT was working again.

Friday, December 22, 2023

Upgrading to Fedora 38

Upgraded from Fedora 37 to 38 following the standard instructions.  There were no errors indicated during the upgrade.  The packages were downloaded, installed, and the system rebooted.  When I came back to the machine after the upgrade and got to the login screen, my USB mouse was functioning sporadically.  I could still log in with the keyboard, but it felt slow, and Fedora had a warning that something didn't load correctly.

I powered down the system and then turned it back on (the usual "kick" if the hardware isn't working), and everything seemed to be working normally.  It might be because my mouse is connected to the desktop through the monitor's USB input?

Once I was back on, I did a 

sudo dnf update

to see if I was current, and it gave the following error:

Problem 1: cannot install the best update candidate for package libheif-freeworld-1.15.1-4.fc38.x86_64
  - nothing provides libheif(x86-64) = 1.17.5 needed by libheif-freeworld-1.17.5-1.fc38.x86_64 from rpmfusion-free-updates
 Problem 2: problem with installed package libheif-freeworld-1.15.1-4.fc38.x86_64
  - package libheif-freeworld-1.15.1-4.fc38.x86_64 from @System requires libheif(x86-64) = 1.15.1, but none of the providers can be installed
  - package libheif-freeworld-1.15.1-4.fc38.x86_64 from rpmfusion-free requires libheif(x86-64) = 1.15.1, but none of the providers can be installed
  - cannot install both libheif-1.16.2-2.fc38.x86_64 from updates and libheif-1.15.1-2.fc38.x86_64 from @System
  - cannot install both libheif-1.16.2-2.fc38.x86_64 from updates and libheif-1.15.1-2.fc38.x86_64 from fedora
  - cannot install the best update candidate for package libheif-1.15.1-2.fc38.x86_64
  - nothing provides libheif(x86-64) = 1.17.5 needed by libheif-freeworld-1.17.5-1.fc38.x86_64 from rpmfusion-free-updates
================================================================================
 Package             Arch     Version            Repository                Size
================================================================================
Skipping packages with conflicts:
(add '--best --allowerasing' to command line to force their upgrade):
 libheif             x86_64   1.16.2-2.fc38      updates                  298 k
Skipping packages with broken dependencies:
 libheif-freeworld   x86_64   1.17.5-1.fc38      rpmfusion-free-updates    59 k

Transaction Summary
================================================================================
Skip  2 Packages

Nothing to do.
Complete!
The output's suggestion of using --best and --allowerasing didn't work, so a search yielded two threads about this issue from months ago:
  • https://discussion.fedoraproject.org/t/unknown-update-error-with-libheif/81302
  • https://discussion.fedoraproject.org/t/rpmfusion-free-updates-libheif-freeworld-and-libheif-version-conflict/82240/7
Although the threads implied that it had been resolved and people had proposed different workarounds, the simplest solution for me was to remove the libheif-freeworld package:

sudo dnf remove libheif-freeworld

This seemed to resolve the issue.

The above threads indicated that a fix was submitted, but it might now be broken again, since when I searched for the various package versions on Fedora and rpmfusion, the working versions of the packages weren't there:

sudo dnf search --showduplicates libheif

Maybe it's the timing of my upgrade that ran into this problem.  Fortunately, it was a quick and easy fix, but sadly I can't say that this was a completely seamless upgrade.

Monday, January 16, 2023

Goodbye OnHub!

On January 11, 2023, Google shut down support for the 7-year-old OnHub WIFI router.  Technically, Google shut down the Google Home App support for the OnHub, because OnHub devices were actually made and sold by other companies (either ASUS or TP-Link, depending on which OnHub you had).  However, since the software was all handled by Google, if the software doesn't work then you can't do anything with it.  It will still "run", but no configuration changes can be made, nor can you see any info about the device.  The only thing you can still do is factory reset and delete the whole network.  It's unfortunate that once Google decided to stop supporting the device, it is essentially dead.

I found the OnHub to be a great WIFI router that was powerful and easy to manage.  It had enough customization for my uses and didn't require me to be an IT admin for it.  Even though there were probably a few features that I wished it had, it was balanced enough that I was willing to sacrifice those features.  I wish that before its end-of-life, Google had provided a way to download its settings and load them to a new network.  Even though I replaced the OnHub with a Google Wifi (not the Nest Wifi), I had to manually put in all the settings, including tracking down all the devices that connect to it so that I could give each a recognizable name.  Other settings like groups, schedules, reserved IPs, IP range, etc. all had to be manually entered.  Most of it is pretty quick, but the device naming is a pain.

Over on the OpenWRT forum there are developers trying to give the hardware additional life by replacing the software on it so it is not reliant on Google and Google's cloud.  That would be great, as it saves devices from going to landfills, and honestly the hardware is still perfectly fine even to this day.

Setting Up Windows with CyberPower UPS

Setting up a Microsoft Windows PC to power down during a power outage does not require installing any proprietary software.

Simply create a new power profile in Windows and go to the advanced settings to set the conditions under which you want to power down.



For example, the machine can be configured to power down when the battery reaches a certain point.  This works because when the CyberPower UPS is connected to the Windows machine through a USB connection, Windows sees it as being on battery power, like a laptop, and can tell how much power is left.

Getting more details about the battery probably requires the CyberPower software, but for simple shutdowns no additional software is needed.

Setting Up Fedora Linux With Cyberpower UPS

These are my notes on how I set up my Fedora Linux desktop to listen to a Cyberpower UPS so it knows when to shut down during a power outage.  This is only for one machine connected directly to the UPS through USB and doesn't cover multiple machines that need to be shut down.

The simplest way to configure your Linux system to work with a Cyberpower UPS is probably to use Cyberpower's PowerPanel for Linux software, but I wanted to see if I could use non-proprietary software.  Instead, I used the open-source Network UPS Tools (NUT), which required a bit of manual work to configure.

All I want is to have my PC shut down a few minutes after the power goes out, so I bought a Cyberpower UPS and use the direct USB connection for it to communicate with the PC.

Once connected, check that Linux can see the UPS:

# lsusb

You should see the UPS listed among the list of USB devices.

Install the NUT software:

# sudo dnf install nut nut-client

This will install NUT and there are 5 key configuration files to be aware of:

  1. ups.conf - settings for UPS driver
  2. upsd.conf - settings for UPS daemon
  3. upsd.users - access control file for the UPS daemon
  4. upsmon.conf - settings for the UPS monitoring daemon
  5. upssched.conf - settings for scheduler daemon

Once NUT is installed, you can use one of its tools to identify the UPS and it will give you info that you can use in the config files:

# sudo nut-scanner

Add the information to the /etc/ups/ups.conf

[cps1500avr]
        driver = "usbhid-ups"
        port = "auto"
        vendorid = "0764"
        productid = "0501"
        product = "ST Series"
        vendor = "CPS"
        bus = "004"

The cps1500avr can be anything you want; it is the name you'll use to identify the UPS.  "usbhid-ups" is the driver for Cyberpower UPSes.  All of this info came from nut-scanner.

Add a udev rule, /etc/udev/rules.d/50-ups.rules, so that Linux understands the UPS:

SUBSYSTEM=="usb", ATTR{idVendor}=="0764", ATTR{idProduct}=="0501", GROUP="nut", TAG+="systemd", ENV{SYSTEMD_WANTS}+="nut-server.service nut-monitor.service"

The TAG+ part is something I found on this blog for an issue where the system service didn't see the UPS and thus couldn't start when the system boots.

Reload the UDEV rule:

# sudo udevadm control --reload-rules
# sudo udevadm trigger

Test that the driver can start and then start/enable the service:

# sudo upsdrvctl start
# sudo systemctl start nut-driver-enumerator.service
# sudo systemctl enable nut-driver-enumerator.service

Now we need to add the NUT user that will be used to monitor the UPS to /etc/ups/upsd.users:

[nutmon]
        password = <password>
        upsmon primary
        actions = SET
        instcmds = ALL

"nutmon" can be anything you like and then you can start the server and ask it to print out the UPS info that it sees.

# sudo systemctl start nut-server.service
# sudo systemctl enable nut-server.service
# upsc cps1500avr

Edit /etc/ups/upsmon.conf by adding:

MONITOR cps1500avr@localhost 1 nutmon  <password> primary

Start and enable the monitoring service

# sudo systemctl start nut-monitor.service
# sudo systemctl enable nut-monitor.service

Finally, we need to tell NUT what to do when certain events happen, such as when the battery is low or when it notices that the power is out.  Edit /etc/ups/upssched.conf (thanks to this article for the example):

# Gives the script to run the commands for various signals
CMDSCRIPT /usr/bin/upssched-cmd

PIPEFN /var/lib/ups/upssched.pipe
LOCKFN /var/lib/ups/upssched.lock

# Send alerts immediately on change in line power
AT ONBATT * EXECUTE onbatt
AT ONLINE * EXECUTE onpower

# (Optional) Silence the beeper after 2 minutes
AT ONBATT * START-TIMER mute_beeper 120
AT ONLINE * CANCEL-TIMER mute_beeper

# Shutdown after 5  minutes on battery (5 * 60 = 300)
AT ONBATT * START-TIMER onbatt_shutdown 300

# Cancel timer if power's restored
AT ONLINE * CANCEL-TIMER onbatt_shutdown

# Battery replacement indicated by cron'd quick test
AT REPLBATT * EXECUTE replace_batt

Then edit the command script you put in (in my case it is /usr/bin/upssched-cmd):

#!/bin/sh
# upssched calls this script with the EXECUTE/timer name as $1.

UPS_USERNAME="nutmon"
UPS_PASSWORD="<password>"
UPS_LINK="cps1500avr@localhost"


case $1 in
    onbatt)
        # make sure beeper is enabled
        upscmd -u ${UPS_USERNAME} -p ${UPS_PASSWORD} ${UPS_LINK} beeper.enable
        # alert
        message="Power outage, on battery"
        logger -t upssched-cmd "$message"
        ;;
    onpower)
        message="Power restored"
        logger -t upssched-cmd "$message"
        ;;
    mute_beeper)
         message="(2) minute limit exceeded, muting beeper"
         upscmd -u ${UPS_USERNAME} -p ${UPS_PASSWORD} ${UPS_LINK} beeper.mute
         ;;
    onbatt_shutdown)
        message="Triggering shutdown after (5) minutes on battery"
        logger -t upssched-cmd "$message"
        /sbin/upsmon -c fsd
        ;;
    replace_batt)
        message="Quick self-test indicates battery requires replacement"
        logger -t upssched-cmd "$message"
        ;;
    *)
        logger -t upssched-cmd "Unrecognized command: $1"
        ;;
esac

Test that it will shut down by sending it the same shutdown signal the UPS would:

sudo upsmon -c fsd

If all goes well, get the service to start on boot:

# sudo systemctl enable nut.target
# sudo systemctl enable nut-driver.target

From now on, if the power outage lasts more than 5 minutes (i.e. the UPS is running on battery), your system will shut down.  You'll need to manually turn the system back on when the power returns.

Notes

  • I'm not sure if something automatically started and enabled the specific driver (nut-driver@cps1500avr.service) or if I did it and just forgot to note it down.
  • This article from the Archlinux wiki described a problem that sounds like the same problem that the UDEV TAG part addresses but uses a different approach.

Saturday, January 7, 2023

Kernel 6.0.16 (and 6.0.1.17?) Causes Freeze (and how to keep previous kernels)

Update 4: Kernel 6.0.18 works!

Update 3: According to users on Reddit, 6.0.17 still exhibits the problem.  One user said that 6.0.18 has the fix.

Update 2: Decided to remove the bad kernel packages using the dnf remove command in the previous update rather than dnf history, since I only wanted to remove the kernel.  That removed the .16 kernel from the boot loader and made .15 the default.

sudo dnf remove $(rpm -qa | grep ^kernel | grep 16-300)

Update: Seems like others are also affected by the same issue, with a suggestion on how to remove and block the 6.0.16 kernel.

Find and remove the 6.0.16 kernel packages:

sudo dnf remove $(rpm -qa | grep ^kernel | grep 16)


Blacklist the 6.0.16 packages from getting installed again by editing /etc/dnf/dnf.conf

# Bad packages
excludepkgs=kernel*-6.0.16-300.fc37.x86_64

Tracking bugs on Fedora:


---------------------------------------------------------------------------

Today I did a typical update of my Fedora system which included a new kernel (kernel-6.0.16-300.fc37.x86_64), but when I rebooted and logged in, the system would freeze.  This might be the first time (or at least it's been a long time) that a Fedora upgrade hasn't been seamless for me.

Fortunately, when Fedora upgrades the kernel, it keeps copies of the previous two versions, so switching back simply requires watching for the GRUB boot loading screen and picking the previous kernel.  Just make sure you do this before the system automatically boots to the latest version.

I was able to determine the freeze is caused by the kernel, since booting the previous kernel showed no problems after logging in.  Knowing that I had at least one working kernel, I had two choices: uninstall the latest kernel, or keep using the old one and wait to see if a future update fixes the problem.

I've not uninstalled a kernel on Fedora before and was unsure what would happen.  I could also simply try to revert the entire update, but that still leaves the same question.  Using dnf, you can review what was done with:

sudo dnf history

To see the details of a specific transaction, use the ID that the history command gives and use:

sudo dnf history info <id>

With dnf, you can choose to undo or rollback a transaction.  Undo will do the opposite of what was done (e.g. uninstall or reinstall the packages) in the specific transaction, while rollback will undo everything between that transaction and the current system state.

Since this involved the kernel, I didn't want to do either, so I decided to just keep rebooting into the previous kernel, even though it is inconvenient, and hope that a fix comes for whatever is causing the problem.

Protecting the Previous Kernel

What if it took a few more kernel updates before the problem gets fixed and the working kernel gets removed during the update?

The versionlock plugin will lock and keep the kernel on the system.

sudo dnf install python3-dnf-plugins-extras-versionlock

sudo dnf versionlock add kernel-6.0.15-300.fc37.x86_64

The delete option will let you remove the lock:

sudo dnf versionlock delete kernel-6.0.15-300.fc37.x86_64

Sunday, November 21, 2021

Upgrading to Fedora 35

Used the Fedora DNF upgrade method to go from Fedora 34 to 35.  Didn't notice any issues with the upgrade.  I decided to do the upgrade even though I just upgraded to Fedora 34 a couple of weeks ago; since I had some free time this week, I upgraded now while I have time to deal with any issues.

Tuesday, November 2, 2021

Upgrading to Fedora 34

Although Fedora 35 just came out, I decided to first upgrade to Fedora 34 and give Fedora 35 a little bit of time to bake.  I've had good success using the regular Fedora DNF upgrade method, but this is the first time I've upgraded Fedora on my current machine, because it was a new system and had only ever had Fedora 33 installed.

Didn't run into any immediate issues that I could tell.

Monday, November 1, 2021

Unable to access the internet when using PiHole?

PiHole added rate-limiting for DNS queries at a very low default of 1000 queries per minute and enabled it by default, even when updating an existing installation.  To change the rate limit (or turn it off), edit /etc/pihole/pihole-FTL.conf and add/edit the line:

RATE_LIMIT=0/0

The format is [# of queries]/[seconds], so a limit of 1000 queries per hour would be RATE_LIMIT=1000/3600.

Wednesday, April 7, 2021

Local Go module file and Go tools magic

I really value that when working with Go there is no "hidden magic" in source code.  Go source code is essentially WYSIWYG.  You don't see decorators or dependency injection that might change behavior after the code is compiled or run, which would require you to not only understand the language and syntax but also learn additional tools' effects on the source code.  While this is true of the language, it is not true of the go command for Go's module system.

I've personally found Go modules to be more confusing than the original GOPATH.  I understand that it solves some of the complaints about GOPATH and also addresses the diamond dependency problem, but it adds complexity to the developer workflow and under-the-hood magic.  Maybe that's to be expected when it goes beyond source code management and adds a whole package management layer on top, but I'd be much happier to deal with this added complexity and burden if the solution were complete (how about package cleanup so my mod directory isn't growing non-stop?)!

Modules add the go.mod file, which tracks all of a project's dependencies and their versions.  This introduces a problem when one is developing both applications and libraries, since the developer may have both the released production version and an in-development version of a library locally.  To point your application at the local library without constantly changing the import path in source code, the replace directive can be used; but when committing the code, it is not ideal to submit the go.mod with the replace directives in it, as they will likely break the build for someone else checking out the code and can expose private data (the local path might contain the user name).

Now developers have to add the replace directives locally, remove them right before submission, and then put them back (without typos!).  Fortunately, in Go 1.14, the go commands (build, clean, get, install, list, run, and test) got a new flag, '-modfile', which allows developers to point at an alternative go.mod file.  This allows the production go.mod file to go unmodified during development/debugging, with a local dev version of go.mod that can be excluded from getting committed (i.e. .gitignored).

This can be done on a per-project level by adding -modfile=go.local.mod to go [build | clean | get | install | list | run | test]:

go build -modfile=go.local.mod main.go

Note that whatever the file name is, it still has to end in .mod, since the tool creates a corresponding local go.sum file by taking the local mod file's name and replacing the .mod extension with .sum.

To apply the use of go.local.mod globally, update "go env":

go env -w GOFLAGS=-modfile=go.local.mod

go env -w will write the -modfile value to where Go looks for its settings:

Defaults changed using 'go env -w' are recorded in a Go environment configuration file stored in the per-user configuration directory, as reported by os.UserConfigDir.

So the flow that Jay Conrad pointed out in this bug thread would be as follows (a hypothetical go.local.mod is shown after the list):

  1. Copy go.mod to go.local.mod.
  2. Add go.local.mod to .gitignore.
  3. Run go env -w GOFLAGS=-modfile=go.local.mod.  This tells the go command to use that file by default.
  4. Add any replace and exclude directives or other local edits.
  5. Before submitting and in CI, make sure to test without the local file: go env -u GOFLAGS or just -modfile=.
  6. Probably also go mod tidy.
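
For illustration, here's what a go.local.mod might look like (module paths are made up): it is identical to go.mod except for the local-only override, which never gets committed:

module example.com/app

go 1.14

require example.com/mylib v1.2.3

// Local-only override pointing at the in-development copy of the library.
replace example.com/mylib => ../mylib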