Recent developments in AI have brought renewed interest in command-line tools and terminal apps. Linux has always had a rich collection of CLI utilities, but I wanted to check out what new options are now available. This is my running list of things I've found interesting but haven't spent enough time with to recommend on my tools page.
Friday, April 10, 2026
Tuesday, March 25, 2025
Why the Model Context Protocol (MCP) Is Confusing
**Update 3/26/2025**
OpenAI added MCP support to their agent SDK.
--------------------------------
On Nov. 25, 2024, Anthropic published a blog post about the Model Context Protocol (MCP). It didn’t get much attention at the time because it wasn’t an announcement of much significance. Suddenly in 2025, MCP has gotten a lot of hype and attention, and has also caused a lot of confusion as to why there are so many YouTube talking heads discussing it.
It was nice that Anthropic published how they connect Claude with tools in the Claude Desktop app, even if the post was a bit of marketing to sell it as a standard and to encourage an open community. There is a technical aspect (a protocol) to it, but it felt like a business play to get developers to extend Claude with plugins.
Large Language Models like Claude cannot perform any actions. They’re like a brain with no body. They might know that an email should be sent, but they can’t actually send the email. To connect the “thought” (send email) with the action requires basic programming. Using MCP is how Anthropic does it but it isn’t the only way. Let’s take a look at various ways that this is accomplished and then see how MCP fits in.
You Are the Agent
In this scenario, a person goes to claude.ai and has a conversation with Claude about writing an email to invite someone to lunch. Claude generates the email body, letting the person know to copy it into an email program to send. The person manually copies that text into their email or calendar app and sends the invitation. The person is the agent because they are performing the action.
Using an AI Assistant
Here, a person uses an app (this can be a web app, desktop app, mobile app, etc.) such as a personal assistant ala Jarvis from Iron Man. The user asks Jarvis to send an email to invite someone for lunch. Jarvis composes the invitation message and sends the invitation through a calendar app so that it also records the event on the calendar. So how does Jarvis do this?
Method 1
- Jarvis sends your question to the LLM along with prompts describing the available tool(s).
- Jarvis looks at the response to determine which tools to use.
- Jarvis executes the chosen tool(s) through the tool API.
- The results are sent back to the LLM.
- The LLM formulates a natural language response.
- The response is displayed to you!
In the early days (2023), this might be done like this:
The user speaks to Jarvis, “Jarvis, invite Thor to lunch next Wednesday.”
The Jarvis code passes the text to an LLM along with an additional prompt:
Respond to the user, but if the request is to add to a calendar then respond in JSON:
{
Tool: “calendar”
Invitee: name
Date: date
Time: time
Body: message
}

The Jarvis program will get the response and, if the response is a tool response, it will parse the JSON and call the calendar API.
The Jarvis code calls the LLM again with the text, “tell the user that the invite was successfully sent,” and then returns the response to the user.
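The Method 1 flow can be sketched in Go. This is only a sketch: the struct mirrors the hypothetical JSON shape from the prompt above, and a real app would call an actual LLM and calendar API instead of returning strings.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// toolCall mirrors the JSON shape the prompt asks the LLM to emit.
// The field names are hypothetical; a real app would match its own prompt.
type toolCall struct {
	Tool    string `json:"Tool"`
	Invitee string `json:"Invitee"`
	Date    string `json:"Date"`
	Time    string `json:"Time"`
	Body    string `json:"Body"`
}

// handleResponse decides whether the LLM's reply is a tool call or plain text.
func handleResponse(reply string) string {
	var call toolCall
	if err := json.Unmarshal([]byte(reply), &call); err == nil && call.Tool == "calendar" {
		// In a real app this is where the calendar API would be called.
		return fmt.Sprintf("calling calendar API for %s on %s", call.Invitee, call.Date)
	}
	// Not a tool call: show the LLM's text directly.
	return reply
}

func main() {
	fmt.Println(handleResponse(`{"Tool":"calendar","Invitee":"Thor","Date":"Wednesday","Time":"12:00","Body":"Lunch?"}`))
	fmt.Println(handleResponse("Sure, here's a draft invitation..."))
}
```

The fragility of this approach is visible here: the whole scheme depends on the LLM reliably emitting the exact JSON the prompt asked for.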
Method 2
- Developer registers available tools.
- Jarvis sends your question to the LLM.
- The LLM analyzes the available tools and decides which one(s) to use.
- Jarvis executes the chosen tool(s) through the tool API.
- The results are sent back to the LLM.
- The LLM formulates a natural language response.
- The response is displayed to you!
An enhancement was added to many LLMs’ APIs to allow developers to register tools, their purpose and their parameters. The Jarvis code will register a tool called “calendar”, give it a description such as “Tool to add, update and remove user’s calendar.”, and specify what parameters it needs.
Now, when Jarvis passes “Jarvis, invite Thor to lunch next week,” to the LLM, it will respond with JSON and Jarvis can call the calendar API.
The Jarvis code calls the LLM again with the text, “tell the user that the invite was successfully sent,” and then returns the response to the user.
Method 3 (MCP)
- User registers available tools.
- Jarvis sends your question to Claude.
- Claude analyzes the available tools and decides which one(s) to use.
- Jarvis executes the chosen tool(s) through the MCP server, which calls the tool API.
- The results are sent back to Claude.
- Claude formulates a natural language response.
- The response is displayed to you!
With MCP, the user (on desktop/mobile) or developer (on cloud) registers MCP servers with Jarvis. Jarvis can then get the tool descriptions from the MCP server, which it passes to the LLM.
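For example, Claude Desktop reads registered MCP servers from a JSON configuration file. A sketch of an entry (the server name and command path here are hypothetical):

```json
{
  "mcpServers": {
    "calendar": {
      "command": "node",
      "args": ["/path/to/calendar-server/index.js"]
    }
  }
}
```

Claude Desktop launches the listed command and asks the server what tools it provides, which is the "manual registration" step discussed later.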
When Jarvis passes “Jarvis, invite Thor to lunch next week,” to the LLM, the LLM will determine the tool to use.
Jarvis will then call the MCP server to send the calendar invite.
The Jarvis code calls the LLM again with the text, “tell the user that the invite was successfully sent,” and then returns the response to the user.
Comparison
With MCP, tool registration is passed to the user and the tool description is handed off to the tool developer, but otherwise the steps remain the same.
Comparing the different methods shows that the steps are the same; just the implementation is different. This is one reason there's a lot of confusion: there seems to be very little benefit.
Having a standard protocol can be advantageous, but only when all the LLMs adopt it; otherwise it is just how to interact with Claude.
MCP servers are potentially reusable and might ease integration, which is a benefit since it'll be like having only one API to learn. This requires wide adoption and availability, which isn't a given even if it is backed by one of the big LLM providers.
Shortcomings of the MCP
As a protocol, there are a lot of shortcomings and the technical benefits are minor.
Some technical shortcomings are:
- There’s no discovery mechanism other than manual registration of MCP servers.
- There are now extra MCP servers in the tech stack to do what could be achieved by a library.
Saturday, February 15, 2025
WebAssembly (WASM) with Go (Golang) Basic Example
I first wrote about using Go for WebAssembly (WASM) 6 years ago, right before the release of Go 1.11, which was the first version of Go to support compiling to WASM. Go's initial support for WASM had many limitations (some of which I listed in my initial article) that have since been addressed, so I decided to revisit the topic with some updated examples of using Go for WASM.
Being able to compile code to WASM now allows:
- Go programs to run in the browser.
- Go functions to be called by JavaScript in the browser.
- Go code to call JavaScript functions through syscall/js.
- Go code access to the DOM.
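The program the official wiki starts with is ordinary Go. A minimal sketch (with the output string pulled into a helper here purely for illustration; the same code also compiles and runs natively):

```go
package main

import "fmt"

// message is split out only to show the code is plain Go;
// nothing WASM-specific is needed in the source.
func message() string { return "hello from Go" }

func main() {
	// Under GOOS=js GOARCH=wasm this prints to the browser's console;
	// compiled natively it prints to stdout.
	fmt.Println(message())
}
```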
Setup
Go's official Wiki now has an article on the basics of using Go for WASM including how to set the compile target and setup.
A quick summary of the steps:
Compile to WASM with the output file ending in .wasm, since it's likely that the MIME type set in /etc/mime.types uses the wasm extension.
> GOOS=js GOARCH=wasm go build -o <filename>.wasm

Copy the JavaScript support file to your working directory (wherever your http server will serve it from). It's necessary to use the wasm_exec.js that matches the version of Go being used, so it might be best to make this copy part of the build script.
> cp "$(go env GOROOT)/lib/wasm/wasm_exec.js" .

Then add the following to the html file to load the WASM binary:
<script src="wasm_exec.js"></script>
<script>
const go = new Go();
WebAssembly.instantiateStreaming(fetch("main.wasm"), go.importObject).then((result) => {
go.run(result.instance);
});
</script>

Keeping It Running
It is a good starting point, but the Go code example is too simplistic. It only demonstrates that the WASM binary created by Go can be loaded, by having it write a line to the browser's console. The Go program basically gets loaded by the browser, prints a line and exits. Most of the time, it's probably desirable to have the WASM binary stay running after it's loaded. This can be achieved by having a channel that waits forever:
c := make(chan struct{})
<-c

or even easier:

select {}

Having either of these at the end of main() will keep the program alive after it gets loaded into the browser.
In Place of JavaScript
Being able to access the DOM excites me the most because it allows me to avoid writing JavaScript, followed by being able to run Go programs in the browser. While I think the interop between Go and JavaScript is probably the most practical application, it's not something I've had much need for, since I'm not a front-end developer doing optimizations or trying to reuse Go code between the front end and back end.
I don't mind using HTML for UI development, or even CSS; I'm just personally not a fan of JavaScript. This isn't to say that it is bad, just that I prefer other languages, like some people prefer C++, Java, Python, etc. I don't have fun writing JavaScript like I do with Go, even though I know JavaScript.
Take a basic example of a web app (index.html) with a button to illustrate:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Example</title>
</head>
<body>
<button id="myButton">Click Me</button>
</body>
</html>

JavaScript is used to attach an event to it so that when the button is clicked, an alert message pops up:
// Select the button element
const button = document.getElementById('myButton');
// Attach an event listener to the button
button.addEventListener('click', function() {
alert('Button clicked!');
});
With WASM, the JavaScript can be replaced with Go code:
package main
import (
"honnef.co/go/js/dom/v2"
)
func main() {
document := dom.GetWindow().Document()
// Select the button element
button := document.GetElementByID("myButton")
// Attach an event listener to the button
button.AddEventListener("click", false, func(dom.Event) {
dom.GetWindow().Alert("Button clicked!")
})
select {}
}

In this case, the Go code looks extremely similar to the JavaScript code because I'm using the honnef.co/go/js/dom/v2 package. It is a Go binding to the JavaScript DOM APIs and I find it more convenient than using syscall/js directly.
Why do I prefer this over writing JavaScript, especially when they look so similar? The main reason is that most code is not just calling an API. There's other logic to implement, and for that I can use Go and Go's libraries, along with the benefits of a compiled and type-safe language.
There are still things that need to be considered before using Go and WASM for a general consumer production web app. The binary size can be large, so your audience needs to be considered. But for my own hobby projects, for corporate apps where I know the users have fast connections, or when app performance and functionality outweigh the initial download and memory usage, I'd try to use it.
Monday, February 10, 2025
The Death of Software Engineering Is Premature
There is a lot to be excited about when it comes to advancements in language models. The people that benefit the most are software engineers, because these models can enhance the productivity of knowledgeable engineers.
While a simple prompt such as "build me a web app with a conversation interface," will result in a functional web app (this is great because building a basic commodity application is now accessible to everyone), these aren't the type of applications software engineers are normally tasked to write. Software engineers are tasked with building multi-layered and evolving software that can't be fully described with a few prompts. To build the software that businesses need requires skilled and knowledgeable people to direct the engineering work of the AI.
I built a simple app to filter news headlines that is small enough to be digestible in a post that I think shows how far a layman can go with building software using LLMs and why software engineers are still needed.
The First Prompt
Let's start with what many executives might think makes software engineers unnecessary:
Build me an app that will filter out political articles from Google News.
ChatGPT understood enough to generate an app that uses keywords to filter the headlines. One time it created a React app and a second time it used Python. Both times it used newsapi.org to get the news feeds and required me to understand how to build and run the app. The main issue is that the app isn't really what I wanted. It provides a news search whose results are then matched against keywords to decide what to filter out. I wanted the news that I normally see when I visit news.google.com, minus the political articles, so I told ChatGPT precisely that:
I don't want to search for news. I want to see the articles that I see when visiting news.google.com minus political articles
The first time I asked, ChatGPT understood this enough to switch to using the Google News RSS feed, which is excellent! The second time, it concluded that the way to do it is to scrape news.google.com. Both of these prompts highlight that some specialized knowledge is needed. Does the programming language matter? Should it use RSS or web scraping? How do you run the code it generates? Can these questions be ignored? Who in the organization does the CEO expect to be able to answer these questions?
The Second Prompt
While the CEO might not be able to use the AI, it is possible that a non-engineer or someone who doesn't know how to program can know enough to give more details in the prompt that improves on the first prompt. A technical product manager could give the AI the extra details:
- single page web app using material design
- have a header with the company logo on the left
- have a settings menu on the right side of the header
- ...
- web app will get the list of headlines to display
- pull headlines from RSS feeds and have a LLM return the items that are not political
- build it with language X on the backend and basic Javascript and CSS on the front end.
The last requirement (building with language X) was included to demonstrate how software often needs to fit in with a business' existing infrastructure. If the AI returns a React app but there is no infrastructure to build and deploy React apps, then we'd be looking at additional costs and effort to add that ability, simply because the AI had no knowledge of what is reasonable for a specific business.
LLMs are capable of generating this app, although the generated code didn't actually work and I had to guide it on how to fix things. For now, let's assume that the LLM generated a fully working app.
If we were to stop here then we might conclude that software engineers aren't needed anymore, but I believe the conclusion is actually that companies that previously did not have engineers now have access to some skills their organization never had before. For example, a small company where an office manager built the spreadsheet they run their business on would benefit because, while their spreadsheet worked, it is still limited. The next step of expanding the spreadsheet, which would previously require some coding knowledge, can now be done by the office manager with the aid of an AI. There are cost savings for the company because they didn't have to hire a consultant (these companies probably don't have enough work to hire a full-time engineer to be on staff).
Remember that I left out a number of things that the AI cannot do, including turning the code into an app and deploying it, but the biggest hurdle would be if the AI did not meet all the requirements for the app initially. I had success prompting the AI to make certain changes and enhancements, but multiple times the AI would make a change that didn't work and go further and further down the rabbit hole, and no prompting got it to fix the problem until I gave it very specific instructions on how to fix it ("In function X, change the max/min values to be Y for variable V").
Basically, maintaining, fixing and enhancing the app is where the challenge is and that's where most engineers are spending their time.
The Third Prompt
Prompt 3 (in real life this was actually what I did first) was the fastest and most productive way to build a working application: have a design and give the AI many specific implementation instructions. I knew the architecture, algorithm and code structure, and used that knowledge to guide the AI so that it was essentially a very fast typist of code:
- Write a FetchRSS function that takes in a string value for the RSS feed configuration file path.
- In FetchRSS, open up the configuration file and loop through each line to get the URL.
- For each URL, fetch it and parse the RSS response into a slice of strings
- Oh, ignore any lines that are blank or start with '#'
- ...
Since I enjoy typing out code, I tended to write my own stuff, but for the "boring" code (e.g. handling when err != nil) I'll have the AI write it.
I was able to complete the code much faster, make fewer trips to look up references and documentation, and had fewer bugs from typos.
Here's the problem, though. While AI is capable of generating valid code, it isn't "working" code. It isn't code that can be used directly in a business' infrastructure. AI still struggles to understand an entire code base, and writing code that works within a business also requires understanding the environment the code runs in. If the code will run in complete isolation then it might work, but even a simple function such as "GetUserName" depends on where the user name is stored. What integrations must the AI be aware of in order to get the user name? In a real environment, the AI simply gives code snippets that the engineer must still adapt to the organization's infrastructure.
Conclusion
Having an AI capable of building software for your business is not realistic. My example app is too simplistic to be any company's product, and the AI still wasn't able to build even that on its own.
Having a knowledgeable person on staff using AI will increase your staff's ability to do their jobs better and allow companies to do things they previously couldn't, but most of these things are not things software engineers typically do. If companies try to have non-engineers do software engineering work with AI, it will likely result in decreased productivity. Any gains from having something done quickly at the beginning will be quickly overshadowed by the inability to maintain and enhance the software.
It is ultimately a decision companies have to make for themselves. Is a percentage reduction in engineer compensation a greater value than expanding the productivity and capabilities of the engineers by factors (reduced cost vs. growth)?
What is complex today will not be as complex tomorrow. Those things will become common and AI will be able to take care of them, but as AI takes over more of the common stuff, engineers will be able to tackle the next level of complexity with more originality, because businesses will need it to survive. There are leaders at technology companies boasting about being able to get rid of their software engineers or how they will stop hiring more engineers. These leaders are making a choice to make cheap commodity products in exchange for growth and innovation, but they might find themselves racing to the bottom instead of accelerating to the top.
Can AI someday outpace people? Maybe. But not now. Declaring that engineers aren't needed is premature.
Using AI/LLM to Implement a News Headline Filter
Remove all political headlines. Headlines that includes political figures, and celebrities who are active in politics such as Elon Musk should also be removed.
Wednesday, January 1, 2025
Powerline Error in VIM with Git
On Fedora 41, opening a Go file that is checked into Git results in an error message that flashes briefly before VIM shows the file. I'm not sure of the exact cause, but it went away when I uninstalled powerline and reinstalled it. Uninstalling powerline also uninstalled VIM for some reason, so the re-install was actually `dnf install vim powerline`.
Saturday, December 28, 2024
Framework 13 (AMD) First Impressions
I recently purchased a Framework 13 (AMD) laptop to replace my 7-year-old Razer Stealth as my travel companion and wanted to share my first impressions after taking it on a trip and using it for about a month.
My main criteria for a travel laptop is that it must be light, but as I've gotten older I've added a few more things to look for:
- Display must support good scaling. As displays have improved their resolutions, everything shown has gotten smaller, so I have to scale it up for it to be comfortable for my eyes.
- Replaceable battery. As evidenced by using my previous laptop for 7 years and still not feeling the need to upgrade, I tend to keep my laptops for a long time, especially since I don't rely on my laptop to be my primary driver. While most parts of a laptop are capable of lasting a while, batteries are a different story. I've had to replace the battery on my Razer twice because they started to swell. This is not exclusive to Razer, as I've had it happen on Macbooks and Pixelbooks as well.
- Linux support. I mostly use Linux, especially for development, so I'm most comfortable with it, but I occasionally do have use for Windows (some games the family plays, camera apps, etc.). The key reason, though, is that Windows is commercial and I don't want to be forced to pay for an upgrade if it is not necessary. The Razer Stealth ran Windows 10 and Microsoft says it's not compatible with Windows 11, so I either have to live without security updates or try to install Linux when Razer doesn't support Linux in any way. Having good Linux support is a way to somewhat future-proof the laptop.
First Impressions
System Configuration
- AMD Ryzen™ 5 7640U
- 2880x1920 120Hz 2.8K matte display
- Crucial RAM 32GB Kit (2x16GB) DDR5 5600MHz
- WD_BLACK 2TB SN850X NVMe SSD Solid State Drive - Gen4 PCIe, M.2 2280, Up to 7,300 MB/s
Thursday, January 5, 2023
Add Build info to a Go Binary
Having the build info directly in a binary is useful in helping to identify one binary from another, especially if you do a lot of compilations. Manually updating that information in the source before each build is cumbersome and error-prone, so it's better to automate it.
This can be done in Go using -ldflags with the build command. For example, if you have a main.go file such as this:
package main

import "fmt"

var (
	build string
)

func main() {
	fmt.Printf("build date: %v\n", build)
}
Then you can build it with -ldflags to change the value of build with the current date when using the go build command:
go build -ldflags "-X main.build=`date`" main.go

Be careful that other parts of your program don't change the value during runtime, since it is just a variable.
To make it a little safer, you can put the variables into another package and don't allow it to be updated. You can, for example, create a package called "buildinfo"
package buildinfo
var (
builddate = "0"
)
func GetBuild() string {
return builddate
}
that is called by your main.go:
package main
import (
"fmt"
"example/buildinfo"
)
func main() {
fmt.Printf("build date: %v\n", buildinfo.GetBuild())
}

You will then build your application with:
go build -ldflags="-X 'example/buildinfo.builddate=`date`'"
Running the program will now output something like this:
build date: Thu Jan 5 12:33:38 PM
Friday, April 9, 2021
Keep Go Module Directory Clean with GOMODCACHE
Go makes downloading projects and their dependencies very easy. In the beginning there was go get, which downloads the project source code and its dependencies to $GOPATH/src. With modules, all the dependencies are downloaded to $GOPATH/pkg/mod. The ease of downloading and the lack of management control in the go command mean that it is easy for the two directories to grow in size and to lose track of which project led to the download of a particular package.
I recently started to play around with the Fyne UI toolkit. I didn't initially know what other packages it would download so I wanted to have Fyne and its dependencies in their own area. The go command has a flag -pkgdir that is shared by the various commands.
The build flags are shared by the build, clean, get, install, list, run, and test commands:
...
-pkgdir dir
install and load all packages from dir instead of the usual locations. For example, when building with a non-standard configuration, use -pkgdir to keep generated packages in a separate location.
This didn't work as I expected because it didn't seem like it did anything at all. Using the command
go build -pkgdir /tmp

resulted in all the downloaded packages still going to $GOPATH/pkg/mod.
What did work (thanks to seankhliao) is to set the GOMODCACHE variable, which sets more than just the cache location; it also sets the package download location:

GOMODCACHE=/tmp go build

All the downloaded dependency packages will now go to /tmp rather than $GOPATH/pkg/mod.
Honestly, I'm not really sure what -pkgdir is supposed to do. Maybe it is only for things that the build command generates? What does it do when used with go get?
Wednesday, April 7, 2021
Local Go module file and Go tools magic
I really value that when working with Go there is no "hidden magic" in source code. Go source code is essentially WYSIWYG. You don't see decorators or dependency injection that might change behavior after the code is compiled or run, which would require you to not only understand the language and syntax but also learn additional tools' behavior on the source code. While this is true of the language, it is not true of the go command for Go's module system.
I've personally found Go modules to be more confusing than the original GOPATH. I understand that it solves some of the complaints about GOPATH and also addresses the diamond dependency problem, but it also adds complexity to the developer workflow and under-the-hood magic. Maybe that's to be expected when it is going beyond source code management and adding a whole package management layer on top, but I'd be much happier to deal with this added complexity and burden if the solution was complete (how about package cleanup so my mod directory isn't growing non-stop?)!
Modules add the go.mod file that tracks all of a project's dependencies and their versions. This introduces a problem when one is developing both applications and libraries, since it is possible that the developer has both the released production version and an in-development version of a library locally. To point your application at the library without constantly changing the import path in source code, the replace directive can be used. But when committing the code, it is not ideal to submit the go.mod with the replace directives in it, as it will likely break the build for someone else checking out the code and can expose some private data (the local path might contain the user name).
Now developers have to add the replace directives locally, remove them right before submission and then put them back (without typos!). Fortunately, in Go 1.14, the go commands (build, clean, get, install, list, run, and test) got a new flag -modfile which allows developers to tell them to use an alternative go.mod file. This allows the production version of the go.mod file to remain unmodified during development/debug, with a local dev version of go.mod that can be excluded from getting committed (i.e. .gitignored).
This can be done on a per-project level by adding -modfile=go.local.mod to go [build | clean | get | install | list | run | test]:
go build -modfile=go.local.mod main.go

Note that whatever the file name is, it still has to end in .mod, since the tool creates a local go.sum based on the local mod file's name with the extension renamed from .mod to .sum.
To apply the use of go.local.mod globally, update "go env":
go env -w GOFLAGS=-modfile=go.local.mod

go env -w will write the -modfile value to where Go looks for its settings:
Defaults changed using 'go env -w' are recorded in a Go environment configuration file stored in the per-user configuration directory, as reported by os.UserConfigDir.
So the flow that Jay Conrad pointed out in this bug thread would be as follows:
- Copy go.mod to go.local.mod.
- Add go.local.mod to .gitignore.
- Run go env -w GOFLAGS=-modfile=go.local.mod. This tells the go command to use that file by default.
- Add any replace and exclude directives or other local edits.
- Before submitting and in CI, make sure to test without the local file: go env -u GOFLAGS or just -modfile=.
- Probably also go mod tidy.
Tuesday, April 6, 2021
Listing installed packages on Fedora with DNF
To list the packages that are user installed:
dnf history userinstalled

To list all installed packages:
dnf list installed
Sunday, January 17, 2021
My Systems (2021)
Updated 8/7/2025 - Self built daily driver
Updated 5/21/2023 with Beelink system
2021 brings upgrades to the computers in the house, which have been fairly static over the past 7-8 years. I got a couple of new systems and repurposed some parts from the old systems, so this post is mainly to inventory the new configurations for my own reference.
Daily Driver (Self-Built)
- AMD Ryzen 7 7700X CPU (8-core, 16-threads, 4.5GHz base, 5.5Ghz Max Boost)
- ASUS B650M-PLUS WIFI AM5 Motherboard
- Gigabyte GeForce RTX 4070 Ti Super with 16GB GPU
- Corsair Vengeance DDR5 64GB Memory
- be quiet! Pure Rock 2 CPU Cooler
- Corsair RM750e (2023) Power Supply
- Corsair 4000D Airflow Mid-Tower ATX Case
- Crucial P3 Plus 2TB M.2 SSD
- 2 Dell U2424H
- 1 Dell U2421HE
- Dell AC511M soundbar
- Unicomp Ultra Classic keyboard (2009)
- The deprecation of Windows 10 meant that my kids will need a new PC that can run Windows 11.
- The use of local LLMs requires a GPU.
Asus PN50 4800U
- Ryzen 7 4800U [Zen2] (8 cores / 16 threads, base clock 1.8GHz, max 4.2GHz - 8 GPU cores - RX Vega 8, 15W)
- 32 GB Crucial DDR4 3200Mhz RAM (2x16GB)
- 1TB Samsung 970 EVO Plus (M.2 NVMe interface) SSD
- 500GB Crucial MX500 SATA SSD (2.5")
- Intel WIFI 6, BT 5.0
- It has no power button so it turns on when the PC turns on. To use the audio-in jack and the speaker means the PC must be turned on.
- The speaker has a hiss to it like many speakers but with no power button the hiss is always there. I had to plug something into the headphone jack so I don't hear it.
- When something is plugged into the audio-in jack no audio goes through the USB. If there are two audio sources (e.g. PC and music player) they need to share a connection. I have two PCs connected to the monitor (one on display port and one on hdmi) and I can't have one play through USB and one through the audio in without plugging-and-unplugging the audio-in cable. Instead, I have a cable from the monitor's audio-out to the soundbar's audio-in and each machine plays through the DP/HDMI outputs.
Asus PN50 4500U
- Ryzen 5 4500U [Zen2] (6 cores / 6 threads, base clock 2.3GHz, max 4.0GHz - 6 GPU cores - RX Vega 6, 15W)
- 2x 8GB 3200 DDR4 so-dimm by SK hynix
- Intel 660p Series m.2 500GB SSD
- Intel WI-FI 6 (GIG+) + BT 5.0
- *Crucial 128GB m4 2.5" SSD
System 3 (Shuttle DS87)
- Shuttle PC DS87
- Intel Core i7-4790S Processor (4 cores / 8 threads, 8M Cache, base clock 3.2 GHz, max 4.0GHz, 65W)
- Samsung 850 EVO 500GB 2.5-Inch SATA III Internal SSD (MZ-75E500B/AM)
- 2 x Crucial 16GB Kit (8GBx2) DDR3 1600 MT/s (PC3-12800)
- *Intel Network 7260.HMWG WiFi Wireless-AC 7260 H/T Dual Band 2x2 AC+Bluetooth HMC
- *Samsung 840 EVO Series 120GB mSATA3 SSD
- Ryzen 3 4300U [Zen2] (4 cores / 4 threads, base clock 2.7GHz, max 3.7GHz - 5 GPU cores - RX Vega 5, 15W)
- 16 GB Crucial (CT8G4SFRA32A) DDR4 3200Mhz RAM (2x8 GB)
- 500GB Samsung 970 EVO Plus (M.2 NVMe interface) SSD
This system is meant to be a more portable system for when I'm working at another location. I paired it with a portable monitor rather than getting a laptop, since I don't need a mobile system, just one that I can easily transport.
- Ryzen 5 5500U (6 cores / 12 threads, base clock 2.1GHz, max 4.0 GHz, 7 core GPU, @ 1800 MHz, 15W TDP)
- 16 GB DDR4
- 500GB NVME M.2 SSD
- WiFi 6
- BT 5.2
System 4 (Shuttle XH61)
- Intel Core i7-2600S Processor (4 cores / 8 threads, 8M Cache, base clock 2.8 GHz, max 3.8GHz, 65W)
- *Seagate 300GB 7200RPM HDD
- Crucial MX500 CT500MX500SSD1 500GB 2.5in SATA 6Gbps SSD
- TP-Link USB WiFi Adapter for Desktop PC, AC1300Mbps USB 3.0 WiFi Dual Band Network Adapter with 2.4GHz/5GHz High Gain Antenna, MU-MIMO
- 8GB RAM
ASUS VivoMINI UN62
- Intel i3-4030U (2 cores / 4 threads, 1.9 GHz, 3 MB cache, 15W)
- 16GB Crucial (2x8 GB DDR3-1600) 204-pin sodimm
- Samsung 840 EVO 128GB msata3 SDD
- Intel Network 7260.HMWG WiFi Wireless-AC 7260 H/T Dual Band 2x2 AC+Bluetooth HMC
Raspberry Pi 4
- Broadcom BCM2711, Quad core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5GHz
- 4GB LPDDR4-3200 SDRAM
- 2.4 GHz and 5.0 GHz IEEE 802.11ac wireless, Bluetooth 5.0, BLE
- Gigabit Ethernet
- 2 USB 3.0 ports; 2 USB 2.0 ports.
- Raspberry Pi standard 40 pin GPIO header (fully backwards compatible with previous boards)
- 2 × micro-HDMI ports (up to 4kp60 supported)
- 2-lane MIPI DSI display port
- 2-lane MIPI CSI camera port
- 4-pole stereo audio and composite video port
- H.265 (4kp60 decode), H264 (1080p60 decode, 1080p30 encode)
- OpenGL ES 3.0 graphics
- Micro-SD card slot for loading operating system and data storage
- 5V DC via USB-C connector (minimum 3A*)
- 5V DC via GPIO header (minimum 3A*)
- Power over Ethernet (PoE) enabled (requires separate PoE HAT)
- Operating temperature: 0 – 50 degrees C ambient
- Raspberry Pi ICE Tower Cooler, RGB Cooling Fan (excessive but looks cool on the desk).
Friday, January 1, 2021
2021 PC - Asus PN50 4800U
Although I was very tempted to build a new desktop PC and get access to all the power goodness of the latest AMD Ryzen, I was hesitant to give up the small form factor I had with my Shuttle PC DS87. When the Asus PN50 with the AMD Ryzen 4800U became available, I took the plunge.
The specs comparison between the previous and new PCs:
New PC:
- Ryzen 7 4800U [Zen2] (8 cores / 16 threads, base clock 1.8GHz, max 4.2GHz - 8 GPU cores - RX Vega 8, 15W)
- 32 GB Crucial DDR4 3200Mhz RAM (2x16GB)
- 1TB Samsung 970 EVO Plus (M.2 NVMe interface) SSD
- 500GB Crucial MX500 SATA SSD (2.5")
- Intel WIFI 6, BT 5.0
Previous PC:
- Shuttle PC DS87
- Intel Core i7-4790S Processor (8M Cache, base clock 3.2 GHz, max 4.0GHz, 65W)
- Samsung 850 EVO 500GB 2.5-Inch SATA III Internal SSD (MZ-75E500B/AM)
- 2 x Crucial 16GB Kit (8GBx2) DDR3 1600 MT/s (PC3-12800)
There are enough sites giving benchmarks, so I'm not going to repeat what they've done, but I wanted something tangible to show myself the performance improvement. It's generally during compilation that I wish things would go faster, so why not compare compilation between the two systems? The extra cores (8 vs 4) and threads (16 vs 8) should benefit compilation even though the 4800U's base clock is 1.8GHz while the i7's is 3.2GHz. I also expect a modern CPU to be more efficient per clock cycle than a 6-year-old one.
I decided to time the compilation of OpenCV using the following
wget -O opencv.zip https://github.com/opencv/opencv/archive/master.zip
unzip opencv.zip
mkdir -p build && cd build
cmake ../opencv-master/
time cmake --build .
i7 Results
real 28m57.219s
user 26m48.466s
sys 2m01.402s
4800U Results
real 36m48.166s
user 34m54.722s
sys 1m52.574s
How did this happen? Was the i7's 3.2-4.0 GHz too much for the 4800U's 1.8-4.2 GHz to overcome? During compilation, all of the i7's cores did seem to run at around 3.6 GHz, but I suspected that the build was not actually taking advantage of all the cores of the 4800U.
I tried again using Ninja, which automatically runs build jobs in parallel across all available cores.
make clean
cmake -GNinja ../opencv-master/
time ninja
i7 Results
real 11m28.741s
user 85m39.188s
sys 3m23.310s
4800U Results
real 6m39.268s
user 99m03.178s
sys 4m8.597s
This result looks more like what I expected. More system cycles were used on both the i7 and the 4800U as more cores and threads were utilized, but the real time was much shorter. It also shows that for many consumers, fewer but faster cores may be better on the desktop (laptops and battery life add another dimension), since the benefit of extra cores depends on applications being programmed to take advantage of them. That's why gaming systems usually trade core count for faster clock speeds; games aren't known for utilizing multiple cores.
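As an aside, switching generators isn't the only way to get a parallel build: CMake 3.12 and later can parallelize any generator from the `cmake --build` step. A sketch of the equivalent invocations (assuming an already-configured build directory):

```shell
# Number of CPU threads available for parallel jobs.
jobs="$(nproc)"
echo "building with ${jobs} parallel jobs"

# CMake 3.12+ can drive a parallel build with any generator:
#   cmake --build . --parallel "${jobs}"
# Plain make equivalent:
#   make -j"${jobs}"
```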
Friday, November 27, 2020
Building GUI applications with Go (Golang)
TLDR;
- command-line only ~6M
- command-line + GTK GUI ~9M
- command-line + Fyne GUI ~14M
Choices
- A number of GUI projects have been abandoned.
- There is no single "blessed" GUI framework from the Go team.
- There is no fully native Go implementation of a GUI toolkit.
- Can app developers write everything in Go? Can it be written in an idiomatic way?
- Can the entire project just depend on the Go tool chain?
- Is the whole tech stack built with Go?
Gio (https://gioui.org)
- It lacked documentation to help a new user understand how to use it, and I found the existing documentation poorly organized. It relies primarily on code examples and API comments, leaving users to figure out for themselves how to use it.
- While immediate mode gives more control to developers, building GUIs is one area with enough complexity that I don't necessarily mind handing it off to a toolkit. I wonder whether immediate mode is actually more useful to developers who build GUI toolkits than to developers who use them.
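To make the immediate-mode idea concrete, here is a toy sketch (not Gio's actual API): the UI is re-declared from application state every frame, instead of mutating retained widget objects. The `button` and `frame` functions are hypothetical stand-ins for a real toolkit's draw and event loop.

```go
package main

import "fmt"

// state is the application's own data; there are no retained widgets.
type state struct{ clicks int }

// button "draws" a button for this frame and reports whether it was
// clicked; in a real toolkit this would emit draw ops and poll input.
func button(label string, clicked bool) bool {
	fmt.Printf("[button: %s]\n", label)
	return clicked
}

// frame rebuilds the entire UI from state, once per frame.
func frame(s *state, clickEvent bool) {
	if button(fmt.Sprintf("clicked %d times", s.clicks), clickEvent) {
		s.clicks++
	}
}

func main() {
	s := &state{}
	frame(s, false) // frame 1: no input
	frame(s, true)  // frame 2: user clicks; state updates
	frame(s, false) // frame 3: label reflects the new state
}
```

Nothing here is hidden inside widget objects; the toolkit's job each frame is just "run the UI function against the current state," which is where the extra control (and the extra bookkeeping) comes from.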
Fyne (https://fyne.io)
Microsoft Windows
Fyne uses cgo, so on Windows it needs a C toolchain such as MSYS2/MinGW, with these directories added to PATH:
- C:\msys2\mingw64\bin
- C:\msys2\usr\bin
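For a sense of what Fyne code looks like, here is roughly the minimal program (the module path shown is Fyne v2's, `fyne.io/fyne/v2`; at the time of this post the import path was `fyne.io/fyne`):

```go
package main

import (
	"fyne.io/fyne/v2/app"
	"fyne.io/fyne/v2/widget"
)

func main() {
	a := app.New()
	w := a.NewWindow("Hello")
	w.SetContent(widget.NewLabel("Hello from Fyne"))
	w.ShowAndRun() // blocks until the window is closed
}
```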