# Away From Keyboard
## Mocking Best Practices
The best approach to mocking is to mock the object where it is used, not where it is defined.
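In Python, for instance, this means patching the name in the module that *uses* the function, not in the module that defines it. A minimal sketch with `unittest.mock` (the `weather`/`app` modules are hypothetical and built in-line so the example is self-contained):

```python
import sys
import types
from unittest.mock import patch

# Simulate two modules: "weather" defines get_temp(); "app" imports and uses it.
weather = types.ModuleType("weather")
weather.get_temp = lambda city: 21  # pretend this hits a real API
sys.modules["weather"] = weather

app = types.ModuleType("app")
exec(
    "from weather import get_temp\n"
    "def report(city):\n"
    "    return f'{city}: {get_temp(city)}C'\n",
    app.__dict__,
)
sys.modules["app"] = app

# Patching where it is DEFINED has no effect: app already holds its own
# reference to the original function.
with patch("weather.get_temp", return_value=-5):
    print(app.report("Oslo"))  # Oslo: 21C

# Patching where it is USED replaces the name app actually looks up.
with patch("app.get_temp", return_value=-5):
    print(app.report("Oslo"))  # Oslo: -5C
```

The `from weather import get_temp` line is what makes the difference: it copies a reference into `app`'s namespace, so only a patch aimed at `app.get_temp` is visible to `app.report`.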
## build your own workflow
Emacs is all about tailoring the text editor to fit your own needs, and the Lisp programming language is all about building the right abstraction for the problem. After years of using Emacs and writing Emacs Lisp, one lesson stands out: whenever you find yourself repeating a task, it’s time to build your own tools and create a personalized workflow. A good automated workflow isn’t just about saving time; it’s about using automation to offload tedious, repetitive details, freeing up mental energy for the things that truly matter in the task. With the rise of Large Language Models (LLMs), even tasks that previously required fuzzy logic can now be automated.
On mobile, iOS Shortcuts offers a powerful way to design custom workflows. Pair it with tools like a-Shell, Scriptable, and Data Jar, and you can create powerful automation that integrates seamlessly with your daily tasks.
Sometimes, I forget the stroke order of a Chinese character and need to look it up. Instead of using third-party apps or looking it up on a website every time, I built a Scriptable script that handles this for me automatically. It visits zdict.net, downloads the stroke-order GIF, and displays it. Now, with one tap, I can quickly find the stroke order without breaking my flow.
When learning German, I often need to record new vocabulary. To streamline this, I crafted a custom prompt for ChatGPT to help with translations and explanations, and then log the results into [[Obsidian]] for future study. I linked these steps into a single iOS Shortcut, making the process completely automated—from asking the question to saving the notes.
Job hunting can be an exhausting process, involving multiple platforms, browsing job descriptions (JDs), evaluating opportunities, and tracking applications. After repeating these steps too many times, I developed a Python application combined with Selenium for browser automation. This app scrapes job postings, pulls the JDs, and stores the data in a Django backend. I also integrated ChatGPT to compare my resume with the job descriptions, giving me a quick assessment of which positions are a good match. This allows me to filter out irrelevant jobs and focus on those with the highest potential, leaving only the final judgment to manual review.
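As a stand-in for the ChatGPT comparison step, here is a rough sketch of the matching idea using plain keyword overlap instead of an LLM (all job titles, descriptions, and function names are made up for illustration):

```python
import re

def words(text):
    """Lowercase word set; keeps + and # so terms like 'c++' survive."""
    return set(re.findall(r"[a-z0-9+#]+", text.lower()))

def match_score(resume, jd):
    """Fraction of job-description keywords that also appear in the resume."""
    jd_words = words(jd)
    if not jd_words:
        return 0.0
    return len(jd_words & words(resume)) / len(jd_words)

jobs = {
    "backend dev": "python django rest api postgresql",
    "data analyst": "sql excel tableau statistics reporting",
}
resume = "Experienced Python developer: Django, REST API design, PostgreSQL"

# Keep only positions above a threshold for manual review.
shortlist = {title: match_score(resume, jd)
             for title, jd in jobs.items()
             if match_score(resume, jd) >= 0.5}
print(shortlist)  # {'backend dev': 1.0}
```

An LLM-based comparison reads far more signal than raw keyword overlap, but the filtering structure (score each posting, keep only those above a threshold) is the same.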
By building your own workflow, you can minimize time wasted on tedious details and concentrate on what truly matters. Whether it’s handling simple everyday tasks or tackling more complex challenges like job hunting, there’s always room to streamline your process.
## NotebookLM
In my opinion, Google’s NotebookLM is a very useful tool. It’s great at turning information from a high-density format into a lower-density, easier-to-understand one, which helps with passive learning. Take podcasts, for example: while they don’t pack as much information as books or articles, our brains are naturally tuned to language. By repeatedly listening to conversations from different angles, we can trigger new thoughts and slowly absorb knowledge.
Listening to podcasts while doing other activities, like commuting or exercising, allows the information to sink in more naturally. Engaging with the discussions, debates, and ideas in these podcasts can spark thinking, helping us learn without much effort.
Books and articles, on the other hand, are more dense and require more focus and time to fully understand. To digest complex ideas, you usually need to take notes or reflect deeply, which takes more mental energy.
Podcasts offer a way to learn that doesn’t require as much effort. That’s why I find NotebookLM helpful—it makes it even easier to engage with lower-density formats like podcasts.
## ChatGPT CO-STAR Prompt Optimizer Tool
I wrote an LLM prompt generator/optimizer, ChatGPT - CO-STAR Prompt Optimizer, based on “How I Won Singapore’s GPT-4 Prompt Engineering Competition” by Sheila Teo (Towards Data Science).
It uses the CO-STAR technique (Context, Objective, Style, Tone, Audience, Response) and helps me create relevant, creative, and precise responses that meet my needs most of the time.
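A minimal sketch of how a CO-STAR prompt can be assembled, one labelled section per component (the section wording and example values are my own, not the tool’s):

```python
def costar_prompt(context, objective, style, tone, audience, response):
    """Assemble a prompt with one labelled section per CO-STAR component."""
    sections = [
        ("# CONTEXT", context),
        ("# OBJECTIVE", objective),
        ("# STYLE", style),
        ("# TONE", tone),
        ("# AUDIENCE", audience),
        ("# RESPONSE", response),
    ]
    return "\n\n".join(f"{header}\n{body}" for header, body in sections)

prompt = costar_prompt(
    context="I run a blog about Emacs and automation.",
    objective="Write a short announcement for a new post.",
    style="Concise technical writing.",
    tone="Friendly and direct.",
    audience="Developers familiar with Emacs.",
    response="Two sentences of plain text.",
)
```

Spelling out each component forces you to state the things a model would otherwise have to guess, which is most of what the CO-STAR framework buys you.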
## iOS Safari Web Console with Scriptable and Eruda
Sometimes I just want to quickly inspect a webpage on my phone without needing a desktop browser. Unfortunately, iOS Safari doesn’t come with built-in developer tools, but I found a way around that by combining Scriptable with the web console library Eruda (Console for Mobile Browsers | Eruda). Here’s how you can do it too.
We’re going to use a small script in the Scriptable app that loads Eruda right into Safari, giving you an interactive console on your iPhone or iPad. Follow the steps below to get everything running.
First, you’ll need to download the Scriptable app from the App Store. It lets you run JavaScript code directly on your iOS device.
Eruda is a lightweight console for mobile browsers. It’s perfect for inspecting elements and running JavaScript on the fly in Safari.
Now, use the code snippet from my gist (open web consle on iOS Safari · GitHub) to create a script in Scriptable. This script injects the Eruda console into any webpage you’re viewing in Safari.
Then create an iOS Shortcut so that a webpage can be shared to it from Safari to invoke the script.
Once the script runs, Eruda will load at the bottom of the page. Tap the gear icon, and you can inspect elements, execute JavaScript, and generally poke around the site just like you would on a desktop browser.
## Cheap and Capable
The API price for gpt-4o-mini is very low, yet the model is quite capable; it is basically a nearly-free quality boost for the input to larger [[LLM]] models.
Here is a prompt I used:
Enhance the following text to improve its quality for processing by a larger language model:
1. Correct any grammatical or spelling errors.
2. Improve sentence structure and flow.
3. Clarify any ambiguous or vague statements.
4. Ensure logical coherence and progression of ideas.
5. Remove redundant information while preserving all key points.
6. Maintain the original tone and intent of the text.
7. Do not add new information or alter the core meaning.
Provide the enhanced text in a clear, concise format. If any part of the text is unclear or requires subject matter expertise to interpret, flag it with [NEEDS CLARIFICATION] at the end of the relevant sentence.
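With the official `openai` Python client, the prompt above can be used as the system message in a cheap pre-processing pass. A sketch of the request shape (only the payload construction runs here; the network call is commented out, and the input text is made up):

```python
def build_enhance_request(text, system_prompt):
    """Request payload for a pre-processing pass with gpt-4o-mini."""
    return {
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": text},
        ],
    }

req = build_enhance_request(
    "teh quick brwon fox jumps ovr the lazy dog",
    "Enhance the following text to improve its quality ...",  # the prompt above
)

# With the openai package installed and an API key set, the call would be:
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(**req)
# enhanced = resp.choices[0].message.content
```

The cleaned-up text then becomes the input to whatever larger model does the real work.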
## Game of Life in Threejs using WebGL
This Game of Life implementation uses three.js to run the simulation on the GPU. It was implemented by Claude Sonnet 3.5 in five minutes.
## easy config for org-mode preview latex
It’s so pleasant to use [[nix]] to install and configure complex software packages. Here is how to make Emacs Org mode work with LaTeX preview.
config-latex.nix

```nix
# https://nixos.wiki/wiki/TexLive
# For a minimal set of packages needed for Emacs Orgmode
{ pkgs, lib, ... }:
let
  tex = (pkgs.texlive.combine {
    inherit (pkgs.texlive)
      scheme-basic dvisvgm dvipng # for preview and export as html
      wrapfig amsmath ulem hyperref capt-of fontspec;
  });
in { home.packages = lib.mkBefore [ tex ]; }
```
doom-emacs packages.el

```elisp
(package! org-fragtog)
```

doom-emacs config.el

```elisp
(use-package! org-fragtog
  :config
  (add-hook 'org-mode-hook 'org-fragtog-mode))

(after! org
  (setq org-preview-latex-default-process 'dvisvgm)
  (setq org-startup-with-latex-preview t))
```
## org src-block execute pytest
I recently wanted to practice some LeetCode and write documents and code in an org file. To quickly test the code, I wanted to use C-c C-c on a src-block to run pytest. I created this snippet to enable that functionality.
```elisp
(after! org
  (defun org-babel-execute:python-with-pytest (orig-fn body params)
    "Execute a python source block with pytest if :pytest is specified."
    (if (assq :pytest params)
        (let* ((temporary-file-directory ".")
               (temp-file (make-temp-file "pytest-" nil ".py")))
          (with-temp-file temp-file
            (insert body))
          (unwind-protect
              (org-babel-eval (format "pytest -v -s %s" temp-file) "")
            (delete-file temp-file)))
      ;; Fall back to the original behavior for ordinary python blocks.
      (funcall orig-fn body params)))
  (advice-add 'org-babel-execute:python :around #'org-babel-execute:python-with-pytest))
```
Usage example:

```org
#+begin_src python :pytest
def test():
    assert Solution().mergeAlternately("abc", "pqr") == "apbqcr"
    assert Solution().mergeAlternately("ab", "pqrs") == "apbqrs"
    assert Solution().mergeAlternately("abcd", "pq") == "apbqcd"

class Solution:
    def mergeAlternately(self, word1: str, word2: str) -> str:
        longest = max(len(word1), len(word2))

        def get_char(i, chs):
            return chs[i] if i < len(chs) else ""

        r = []
        for i in range(0, longest):
            r.append(get_char(i, word1))
            r.append(get_char(i, word2))
        return "".join(r)
#+end_src
```
I used the built-in tempo package to create a template. This allows me to run M-x insert-leetcode-solution, which inserts the template content and places the cursor on the line below “Problem”.
```org
#+begin_src elisp :tangle config.el
(require 'tempo)

(tempo-define-template
 "leetcode-solution"
 '("* Problem" n p n
   "* Note" n
   "* Solution" n
   "#+begin_src python :pytest" n
   "#+end_src" n))

(defun insert-leetcode-solution ()
  (interactive)
  (tempo-template-leetcode-solution))
#+end_src
```
## transducer
I recently learned the concept of transducers and implemented one in the [[Gleam]] language:
GitHub - nohzafk/gtransducer: Transducer in Gleam language
Transducers originated in Clojure, designed to tackle specific challenges in functional programming and data processing. If you’re working with large datasets, streaming data, or complex transformations, understanding transducers can significantly enhance the efficiency and composability of your code.
At their core, transducers are composable functions that transform data. Unlike traditional functional programming techniques like `map`, `filter`, and `reduce`, which are tied to specific data structures, transducers abstract the transformation logic from the input and output, making them highly reusable and flexible.
Transducers allow you to compose and reuse transformation logic across different contexts. By decoupling transformations from data structures, you can apply the same logic to lists, streams, channels, or any other sequential data structure. This makes your code more modular and adaptable.
One of the primary motivations for using transducers is to optimize data processing. Traditional approaches often involve creating intermediate collections, which can be costly in terms of performance, especially with large datasets. Transducers eliminate this overhead by performing all operations in a single pass, without generating intermediate results.
```python
import time
from functools import reduce

# Traditional approach
def traditional_approach(data):
    return [x * 2 for x in data if (x * 2) % 2 == 0]

# Transducer approach
def mapping(f):
    def transducer(reducer):
        def wrapped_reducer(acc, x):
            return reducer(acc, f(x))
        return wrapped_reducer
    return transducer

def filtering(pred):
    def transducer(reducer):
        def wrapped_reducer(acc, x):
            if pred(x):
                return reducer(acc, x)
            return acc
        return wrapped_reducer
    return transducer

def compose(t1, t2):
    def composed(reducer):
        return t1(t2(reducer))
    return composed

def transduce(data, initial, transducer, reducer):
    transformed_reducer = transducer(reducer)
    return reduce(transformed_reducer, data, initial)

data = range(1000000)

# Measure traditional approach
start = time.time()
traditional_result = traditional_approach(data)
traditional_time = time.time() - start

# Measure transducer approach
xform = compose(
    mapping(lambda x: x * 2),
    filtering(lambda x: x % 2 == 0)
)

def efficient_reducer(acc, x):
    acc.append(x)
    return acc

start = time.time()
transducer_result = transduce(data, [], xform, efficient_reducer)
transducer_time = time.time() - start

# Results
print(f"Traditional approach time: {traditional_time:.4f} seconds")
print(f"Transducer approach time: {transducer_time:.4f} seconds")
print(f"Traditional is faster by: {transducer_time / traditional_time:.2f}x")
```
However, when executed, the transducer version is much slower in Python:
```
Traditional approach time: 0.0654 seconds
Transducer approach time: 0.1822 seconds
Traditional is faster by: 2.78x
```
While transducers offer theoretical benefits in terms of composability and efficiency, Python might not be the best language for leveraging these advantages. Here’s why:
- **Python’s function call overhead**: Python has a relatively high overhead for function calls. Since transducers rely heavily on higher-order functions, this overhead can negate the performance gains that transducers are designed to offer.
- **Optimized built-in functions**: Python’s built-ins like `map`, `filter`, and list comprehensions are highly optimized in C. These often outperform custom transducer implementations, especially for common tasks.
- **Efficient mutation with lists**: Python’s lists are mutable, and appending to a list in a loop is highly efficient. The traditional method of using list comprehensions or `filter` and `map` is often faster and more straightforward than setting up a transducer pipeline.
Transducers shine in functional programming languages that emphasize immutability and composability, such as Clojure or Gleam. In these languages, transducers can significantly reduce the overhead of creating intermediate collections and improve performance in complex data pipelines. They’re especially powerful when working with immutable data structures, where avoiding unnecessary copies is crucial for efficiency.
In contrast, Python’s strength lies in its mutable data structures and optimized built-in functions, which often make traditional approaches more performant. However, if you’re working in a functional programming environment where immutability is the norm, or if you need to maintain a consistent API across various data sources, transducers can be a valuable tool.
Transducers are a powerful tool in the right context, but Python’s inherent characteristics—such as function call overhead and optimized built-ins—mean that traditional approaches may be more efficient for typical data processing tasks. If you’re working in a language that deeply benefits from transducers, like Gleam, they can greatly enhance your code. In Python, however, it’s often best to use the language’s strengths, such as list comprehensions and optimized built-ins, for performance-critical applications.