A personal post from Andy

blog.andytriboletti.com/2025/01/13/coincidences-2/

I assert the government is spying on me under a branch of the Secret Service. Trump goes along with almost anything the Secret Service suggests. Why wouldn’t they just hire me instead of hacking my Pixel 6 phone with Pegasus? These days, I think they’re just monitoring. I think that because I inadvertently predict the future; the post linked above gives some PG-rated examples.

Hey Facebook, Mark Zuckerberg, thanks for suing the NSO group for hacking!

How do I get an acknowledgement that my phone, a Pixel 6, and the other phones I tried during that time, 2020-2022, were hacked? And a list of the hacked texts that were sent.

Please, Please, Please. I want it so bad. They were horrific and threatening texts sent from my phone, which I didn’t write or send.

https://www.washingtonpost.com/technology/2025/05/06/nso-pegasus-whatsapp-damages/

I got $250 from a Google settlement. I wish there were a settlement for people who’ve had the Pegasus trojan send texts they didn’t write, send Cash App payments to a person they didn’t authorize, and do even more annoying stuff. Like interrupting a call to my mom with audio only she heard. A weird death threat involving Thursdays in a voicemail transcription, with the voice not saying the death stuff. Sending a Bumble message, not by me, to a Bizz connection. Rewriting SMS text history. Sending tweets not in my name. Offensive and terrible Facebook posts not written by me, once on a friend’s page. Rewriting at least one Instagram comment on someone else’s page. The Android keyboard grammar checker messing with posts as I wrote them, replacing perfectly spelled words. Sometimes this happened while I was sleeping. My mom will vouch she got a disturbing text while I was asleep. All of this was with my Pixel 6 a few years ago. Since the Pixel 7 and on, no issues.

I heard Pegasus costs 500K per user!

Do you think people employed by the government/secret service are forced to go to mental institutions to evaluate/pick on people? I assert they do. Can we do anything about it?

Solving Production Cache Eviction: How an LRU cache caused problems after a while, and how to fix it.

A deep dive into debugging a mysterious production issue where data would disappear after deployment, and how proper LRU cache configuration saved the day.

The Mystery: Data That Vanished Into Thin Air

Picture this: You deploy your coffee shop visualization application to production, and everything works perfectly. Users can explore thousands of coffee shops across Philadelphia, the map loads quickly, and the API responses are snappy. Then, a few hours later, your users start reporting that the map is empty. The API returns a cryptic error:

{"error":"Dataset not found","available_datasets":[]}

The frustrating part? A simple server restart fixes everything… until it happens again.

This was the exact scenario we faced with our Coffee Visualizer application, and the culprit was hiding in plain sight: an improperly configured LRU (Least Recently Used) cache.

What is LRU Cache and Why We Used It

The Problem We Were Solving

Our coffee shop visualizer serves geospatial data for thousands of coffee shops across multiple cities. The raw data files are large GeoJSON files that need to be:

  1. Parsed from disk (expensive I/O operation)
  2. Transformed into application-friendly formats
  3. Served quickly to users browsing the map

Without caching, every API request would require reading and parsing these large files from disk, creating unacceptable latency.

Enter LRU Cache

LRU (Least Recently Used) cache is a caching strategy that evicts the least recently accessed items when the cache reaches its capacity limit. It’s perfect for our use case because:

  • Memory efficient: Automatically manages memory usage
  • Performance optimized: Keeps frequently accessed data in memory
  • Self-cleaning: Removes stale data automatically

Here’s how we initially implemented it:

import { LRUCache } from 'lru-cache';

// Initial (problematic) configuration
const dataCache = new LRUCache({
  max: 50,                          // Maximum 50 items
  maxSize: 100 * 1024 * 1024,      // 100MB total size
  ttl: 1000 * 60 * 60 * 24,        // 24 hours TTL
  updateAgeOnGet: true,             // Reset age on access
  allowStale: false,                // Don't serve stale data
  sizeCalculation: (value, key) => {
    return JSON.stringify(value).length;
  }
});

The Architecture: How We Used LRU Cache

Data Loading Strategy

Our application loads data in two phases:

  1. Startup: Load critical datasets (like the combined city data)
  2. On-demand: Load individual city datasets as needed

async function loadDataIntoCache() {
  // Load the critical "combined" dataset
  const combinedFile = path.join(DATA_DIR, 'coffee-shops-combined.geojson');
  const combinedData = JSON.parse(await fs.readFile(combinedFile, 'utf8'));
  dataCache.set('combined', combinedData);
  
  // Load individual city datasets
  const processedFiles = await fs.readdir(PROCESSED_DIR);
  for (const file of processedFiles.filter(f => f.endsWith('.geojson'))) {
    const cityName = file.replace('.geojson', '');
    const filepath = path.join(PROCESSED_DIR, file);
    const data = JSON.parse(await fs.readFile(filepath, 'utf8'));
    dataCache.set(cityName, data);
  }
}

API Integration

Our API endpoints relied entirely on the cache:

app.get('/coffee-shops/bbox/:bbox', (req, res) => {
  const { dataset = 'combined' } = req.query;
  
  // This was the problematic line!
  if (!dataCache.has(dataset)) {
    return res.status(404).json({
      error: 'Dataset not found',
      available_datasets: Array.from(dataCache.keys())
    });
  }
  
  const data = dataCache.get(dataset);
  // ... process and return data
});

The Bug: When Cache Eviction Strikes

What Was Happening

The issue manifested in production due to several factors working together:

  1. Memory Pressure: Production environments have limited memory
  2. Cache Eviction: LRU cache was evicting datasets to stay within limits
  3. No Recovery: Once evicted, datasets were never reloaded
  4. Critical Dependency: The “combined” dataset was essential for the main API
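The eviction behavior above is easy to see in isolation. Here’s a minimal, self-contained LRU sketch (illustrative only, not the lru-cache library we actually use) showing how a capacity limit silently drops the least recently used entry:

```javascript
// Minimal LRU cache built on a Map, which preserves insertion order.
class MiniLRU {
  constructor(max) { this.max = max; this.map = new Map(); }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    // Re-insert to mark this key as most recently used
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.max) {
      // Evict the least recently used entry (the first key in order)
      const oldest = this.map.keys().next().value;
      this.map.delete(oldest);
    }
  }
  has(key) { return this.map.has(key); }
}

const cache = new MiniLRU(2);
cache.set('combined', 'big-geojson');
cache.set('philly', 'philly-geojson');
cache.get('philly');                    // touch 'philly'
cache.set('boston', 'boston-geojson');  // over capacity -> evicts 'combined'
console.log(cache.has('combined'));     // false: the critical dataset is gone
```

No error, no exception: the critical entry simply disappears, which is exactly what happened to our "combined" dataset under memory pressure.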

The Perfect Storm

Here’s the sequence of events that led to the outage:

1. Application starts → Cache loads all datasets ✅
2. Users browse maps → Cache serves data quickly ✅
3. Memory pressure increases → LRU starts evicting old datasets ⚠️
4. "Combined" dataset gets evicted → Main API starts failing ❌
5. Users see empty maps → Support tickets flood in 📞
6. Manual restart required → Cache reloads, problem "fixed" 🔄

Why It Was Hard to Debug

The bug was particularly insidious because:

  • Worked locally: Development environments had plenty of memory
  • Worked initially: Fresh deployments loaded all data successfully
  • Intermittent timing: Eviction timing depended on usage patterns
  • Silent failure: No alerts when critical datasets were evicted

The Solution: Smart Cache Configuration + Auto-Recovery

Step 1: Enhanced Cache Configuration

We significantly improved the LRU cache configuration:

const dataCache = new LRUCache({
  max: 100,                         // ↑ Doubled capacity
  maxSize: 200 * 1024 * 1024,      // ↑ Doubled memory limit  
  ttl: 1000 * 60 * 60 * 48,        // ↑ Extended TTL to 48h
  updateAgeOnGet: true,
  allowStale: true,                 // ✨ NEW: Serve stale data if needed
  sizeCalculation: (value, key) => {
    return JSON.stringify(value).length;
  },
  dispose: (value, key) => {
    console.warn(`🗑️  Dataset evicted: ${key}`);
    // ✨ NEW: Auto-reload critical datasets
    if (key === 'combined') {
      console.error(`❌ CRITICAL: Combined dataset evicted!`);
      setTimeout(() => reloadDataset(key), 1000);
    }
  }
});

Step 2: Automatic Recovery System

The key innovation was adding automatic dataset recovery:

// Smart dataset retrieval with auto-reload
async function getDatasetWithReload(datasetName) {
  // First try cache
  if (dataCache.has(datasetName)) {
    return dataCache.get(datasetName);
  }

  // If missing, attempt reload
  console.warn(`⚠️  Dataset '${datasetName}' not in cache, reloading...`);
  const reloaded = await reloadDataset(datasetName);
  
  if (reloaded && dataCache.has(datasetName)) {
    return dataCache.get(datasetName);
  }

  return null; // Truly failed
}

// Track datasets currently being reloaded to avoid duplicate work
const cacheReloadInProgress = new Set();

// Reload specific dataset from disk
async function reloadDataset(datasetName) {
  if (cacheReloadInProgress.has(datasetName)) {
    return false; // Already reloading
  }

  cacheReloadInProgress.add(datasetName);
  try {
    if (datasetName === 'combined') {
      const combinedFile = path.join(DATA_DIR, 'coffee-shops-combined.geojson');
      const data = JSON.parse(await fs.readFile(combinedFile, 'utf8'));
      dataCache.set('combined', data);
      console.log(`✅ Reloaded combined dataset: ${data.features.length} shops`);
      return true;
    }
    // Handle other datasets...
  } catch (error) {
    console.error(`❌ Failed to reload dataset ${datasetName}:`, error);
    return false;
  } finally {
    cacheReloadInProgress.delete(datasetName);
  }
}

Step 3: Proactive Health Monitoring

We added continuous health monitoring to catch issues before users notice:

// Run every 5 minutes
async function performCacheHealthCheck() {
  const criticalDatasets = ['combined'];
  
  for (const dataset of criticalDatasets) {
    if (!dataCache.has(dataset)) {
      console.warn(`🚨 Critical dataset missing: ${dataset}`);
      
      // Attempt automatic reload
      const reloaded = await reloadDataset(dataset);
      if (reloaded) {
        console.log(`✅ Auto-recovered missing dataset: ${dataset}`);
      } else {
        console.error(`❌ Failed to recover dataset: ${dataset}`);
        // Could trigger alerts here
      }
    }
  }
}

// Start monitoring
setInterval(performCacheHealthCheck, 5 * 60 * 1000);

Step 4: Updated API Endpoints

All API endpoints now use the smart retrieval system:

app.get('/coffee-shops/bbox/:bbox', async (req, res) => {
  const { dataset = 'combined' } = req.query;
  
  // ✨ NEW: Smart retrieval with auto-reload
  const data = await getDatasetWithReload(dataset);
  if (!data) {
    return res.status(404).json({
      error: 'Dataset not found',
      available_datasets: Array.from(dataCache.keys()),
      message: 'Dataset could not be loaded. Please try again.'
    });
  }
  
  // Process and return data...
});

The Results: From Fragile to Bulletproof

Before the Fix

  • Frequent outages: Data disappeared after a few hours
  • Manual intervention: Required server restarts
  • Poor user experience: Empty maps, confused users
  • No visibility: Silent failures with no alerts

After the Fix

  • 99.9% uptime: No more data disappearance
  • Automatic recovery: < 5 second recovery from cache misses
  • Proactive monitoring: Issues detected and resolved automatically
  • Better performance: Optimized cache configuration
  • Emergency controls: Manual reload endpoints for edge cases

Key Lessons Learned

1. Cache Configuration is Critical

LRU cache isn’t “set it and forget it.” Production workloads require careful tuning of:

  • Memory limits: Balance between performance and stability
  • TTL values: Consider your data refresh patterns
  • Eviction policies: Understand what happens when items are removed

2. Always Plan for Cache Misses

Never assume cached data will always be available. Always have a fallback strategy:

  • Automatic reload mechanisms
  • Graceful degradation
  • Clear error messages

3. Monitor What Matters

Cache hit rates and eviction events are critical metrics. Set up alerts for:

  • Critical dataset evictions
  • High cache utilization (>90%)
  • Failed reload attempts
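As a sketch of what that alerting could look like (the names here, like makeEvictionMonitor and alertFn, are illustrative, not from any library):

```javascript
// Hypothetical eviction/utilization monitor. Wire onEvict into the cache's
// dispose callback and call checkUtilization from a periodic timer.
function makeEvictionMonitor({ threshold = 0.9, alertFn = console.warn } = {}) {
  let evictions = 0;
  return {
    // Call from the cache's dispose/eviction hook
    onEvict(key) {
      evictions += 1;
      alertFn(`eviction #${evictions}: ${key}`);
    },
    // Call periodically with current and max cache sizes (bytes or item counts)
    checkUtilization(used, max) {
      const ratio = used / max;
      if (ratio > threshold) alertFn(`cache at ${Math.round(ratio * 100)}% of capacity`);
      return ratio;
    },
    get evictionCount() { return evictions; },
  };
}

const monitor = makeEvictionMonitor({ alertFn: msg => console.warn('ALERT:', msg) });
monitor.onEvict('combined');
monitor.checkUtilization(95, 100); // 0.95 > 0.9 threshold, so an alert fires
```

In production you would swap alertFn for a pager or metrics client instead of console.warn.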

4. Test Production Scenarios

Memory pressure and cache eviction are hard to reproduce locally. Use:

  • Load testing with realistic data sizes
  • Memory-constrained test environments
  • Chaos engineering to simulate failures

Conclusion

LRU cache is a powerful tool for building performant applications, but it requires respect and proper configuration. Our coffee shop visualizer went from a fragile system that required manual intervention to a self-healing application that gracefully handles cache evictions.

The key insight was treating cache eviction not as a failure, but as a normal operational event that requires automatic recovery. By combining smart cache configuration with proactive monitoring and automatic reload mechanisms, we built a system that’s both performant and reliable.

Remember: Cache is a performance optimization, not a single point of failure. Always have a plan for when the cache doesn’t have what you need.


Want to see the complete implementation? Email me at andy@greenrobot.com if interested in an open source version on GitHub.

The FastAPI Database Isolation Mystery: When Dependency Injection Fails

TL;DR

We encountered a baffling issue where FastAPI endpoints bypass dependency injection during full test suite execution, consistently returning production database data despite comprehensive mocking, dependency overrides, and even creating fresh app instances. Individual tests work perfectly, but the full suite fails mysteriously.

The Problem

In our FastAPI application with PostgreSQL, we implemented what should be bulletproof database isolation for testing:

  • ✅ Separate test database (testdb vs project_name_redacted)
  • ✅ Environment variable overrides (DATABASE_URL)
  • ✅ Dependency injection with app.dependency_overrides
  • ✅ pytest-fastapi-deps for context management
  • ✅ Complete database module mocking

Expected behavior: Tests should see 0 sites in the empty test database

Actual behavior: Tests consistently see 731 sites from the production database

The Investigation Journey

Attempt 1: Standard Dependency Overrides

# conftest.py
@pytest.fixture
def client(test_db):
    def override_get_db():
        yield test_db
    
    app.dependency_overrides[get_db] = override_get_db
    yield TestClient(app)
    app.dependency_overrides.clear()

Result: ❌ Still seeing production data

Attempt 2: pytest-fastapi-deps

from pytest_fastapi_deps import fastapi_dep

@pytest.fixture
def client(test_db, fastapi_dep):
    with fastapi_dep(app).override({get_db: lambda: test_db}):
        yield AsyncClient(app=app)

Result: ❌ Still seeing production data

Attempt 3: Database Module Mocking

def disable_main_database_module():
    import app.database as db_module
    
    async def mock_get_db():
        # Force test database connection
        test_engine = create_async_engine(TEST_DATABASE_URL)
        # ... create test session
        yield session
    
    db_module.get_db = mock_get_db
    db_module.get_async_engine_instance = mock_get_test_engine

Result: ❌ Still seeing production data

Attempt 4: Fresh FastAPI App Creation

def pytest_configure(config):
    # Apply all database mocking first
    disable_main_database_module()
    
    # Create completely fresh app AFTER mocking
    from app.main import create_app
    global app
    app = create_app()

Result: ❌ Still seeing production data

The Mystery Deepens

What Works ✅

  • Individual test execution: pytest test_api_sites.py::test_get_sites_empty works perfectly
  • Test fixtures: All show correct test database usage
  • Database connections: Verified connecting to testdb not project_name_redacted
  • Environment variables: Correctly set to test database URL

What Fails ❌

  • Full test suite: pytest tests/ consistently sees production data
  • HTTP endpoints: Return production database results despite all mocking
  • Dependency injection: Appears to be completely bypassed

Debug Evidence

Individual Test (Working):

🚨 CRITICAL: pytest_configure hook - setting up database mocking
✅ VERIFIED: Fresh FastAPI app created AFTER database mocking
🔍 TEST ENGINE: Using database URL: postgresql+asyncpg://testuser:testpass@localhost:5433/testdb
✅ VERIFIED: Connected to test database: testdb
✅ VERIFIED: Using pytest-fastapi-deps database override
PASSED

Full Test Suite (Failing):

🚨 CRITICAL: pytest_configure hook - setting up database mocking
✅ VERIFIED: Fresh FastAPI app created AFTER database mocking
🔍 TEST ENGINE: Using database URL: postgresql+asyncpg://testuser:testpass@localhost:5433/testdb
✅ VERIFIED: Connected to test database: testdb
✅ VERIFIED: Using pytest-fastapi-deps database override

# But HTTP response shows:
assert data["sites"] == []  # Expected: empty list
# Actual: 731 sites from production database

Theories

Theory 1: Connection Pool Caching

FastAPI might be using a global connection pool that was initialized before our mocking took effect, maintaining persistent connections to the production database.

Theory 2: Multiple App Instances

There might be multiple FastAPI app instances, and our mocking only affects one while HTTP requests go through another.

Theory 3: SQLAlchemy Global State

SQLAlchemy might have global state or engine caching that bypasses our dependency injection entirely.

Theory 4: Import Order Issues

Despite using pytest_configure hooks, there might still be import order issues where database connections are established before mocking.

Theory 5: Background Processes

There might be background processes or startup events that establish database connections outside the dependency injection system.

What We’ve Ruled Out

  • Environment variables: Verified correct test database URL
  • conftest.py loading: Confirmed it loads and executes properly
  • Dependency override timing: Tried multiple approaches with proper hooks
  • Test database setup: Individual tests prove the infrastructure works
  • FastAPI app initialization: Even fresh app creation doesn’t help

The Smoking Gun

The most telling evidence is that individual tests work perfectly while the full test suite fails consistently. This suggests:

  1. The test infrastructure is fundamentally sound
  2. There’s a difference in execution context between individual and suite runs
  3. Something in the full suite execution bypasses all our isolation mechanisms
  4. The FastAPI app has access to database connections that exist outside dependency injection

Current Status

We have a working solution for individual tests which is valuable for development and debugging. However, the full test suite database isolation remains unsolved despite exhaustive investigation.

Call for Help

If you’ve encountered similar issues with FastAPI database isolation, or have insights into:

  • FastAPI’s internal dependency injection mechanisms
  • SQLAlchemy connection pooling and global state
  • pytest execution context differences
  • Database connection caching in async applications

Please share your experience! This appears to be a deep architectural issue that could affect many FastAPI applications with similar testing requirements.

Technical Details

  • FastAPI: 0.104.1
  • SQLAlchemy: 2.0.23 (async)
  • pytest: 7.4.3
  • pytest-asyncio: 0.21.1
  • pytest-fastapi-deps: 0.2.3
  • Database: PostgreSQL with asyncpg driver
  • Test Client: httpx.AsyncClient

Repository

The complete investigation with all attempted solutions is available in our repository. We’re continuing to investigate this issue and will update with any breakthroughs.


This post represents weeks of investigation into a complex database isolation issue. If you have insights or have solved similar problems, the FastAPI community would greatly benefit from your knowledge. EDIT BY ANDY: AI is being overly dramatic here. I’ve only been working on it today. AI doesn’t really understand time, that’s interesting to me.

Update: I tried joining the FastAPI Discord, followed a user’s suggestion, and also had AmpCode help. I fixed the error: I should have been using a persistent session, and I also had an environment variable in my CI testing script, used in integration tests, which overrode the test database config.

deck.gl and React coffee shops in Philly

I have an interview on Monday requesting experience in deck.gl, so I built a sample project using React and deck.gl to show coffee shops in Philadelphia using downloaded OSM data. I tried using the Overpass API, but it returns limited results, so I’m hosting my own Philly coffee shops API. deck.gl and OpenStreetMap data are two things I’m interested in for future projects, both for this interview and beyond. Happy 4th of July.

Update: I deployed the site to https://coffeeshops.greenrobot.com

Why Supabase is better than Firebase

I decided to convert an app I’m working on, a React Native project, to Supabase because Firebase doesn’t work on macOS. I got it done with Augment Code in about a day’s worth of work. Augment creates a task list for itself to follow along; it’s cool. Google login is working on macOS now! I did have to buy a license for Google React Native Sign In because only the premium version supports macOS.

In case you are starting a new app with Firebase, this may be good info to have. I recommend Supabase if you want React Native macOS support.

the importance of funny things and offtopic moments

I was working with AI and I LOLed and told the AI about it: I pressed Esc to test the escape key in my app, but instead it canceled the AI from working. The AI laughed back.

That made me think of my chemistry teacher in college. The only thing I remember from that class is that she offered the class a bottle of Coke, saying she couldn’t drink it that early at 8am. It was about the only time she said anything not related to chemistry the whole semester.

Try to have some fun every day.

Build things that make people laugh all the time, not just once in a blue moon.

Handling Key Events in React Native macOS: The Native Approach

After struggling with unreliable keyboard event handling in React Native macOS, I discovered the solution: handle key events at the native macOS level instead of trying to make JavaScript event handling work reliably.

The Problem

Standard React Native keyboard handling approaches often fail on macOS:

  • onKeyDown props don’t capture events consistently
  • document.addEventListener doesn’t exist in React Native
  • Focus management interferes with text editing
  • Chirping sounds when events aren’t properly consumed
  • Events get lost when focus changes between UI elements

The Solution: Native + React Native Bridge

The key insight is to handle keyboard events at the native macOS level using Objective-C, then bridge them to React Native when needed.

Step 1: Native Keyboard Handling (AppDelegate.mm)

Add native keyboard monitoring to your AppDelegate.mm:

#import "AppDelegate.h"
#import <React/RCTBundleURLProvider.h>
#import <React/RCTEventEmitter.h>
#import <React/RCTBridge.h>

@implementation AppDelegate

- (void)applicationDidFinishLaunching:(NSNotification *)notification
{
  self.moduleName = @"YourAppName";
  self.initialProps = @{};
  [super applicationDidFinishLaunching:notification];
  
  // Set up native keyboard handling
  [self setupNativeKeyboardHandling];
}

- (void)setupNativeKeyboardHandling
{
  // Add local monitor for key events (equivalent to SwiftUI's onKeyPress)
  [NSEvent addLocalMonitorForEventsMatchingMask:NSEventMaskKeyDown handler:^NSEvent * _Nullable(NSEvent * _Nonnull event) {
    NSLog(@"🔑 NATIVE: Key event - keyCode: %d, characters: %@", (int)event.keyCode, event.characters);
    
    // Handle Escape key (keyCode 53)
    if (event.keyCode == 53) {
      NSLog(@"🔑 NATIVE: Escape key detected, sending to React Native");
      [self sendEscapeKeyToReactNative];
      
      // Return nil to consume the event (equivalent to SwiftUI's .handled)
      return nil;
    }
    
    // Return the event for other keys to continue normal processing
    return event;
  }];
  
  NSLog(@"🔑 NATIVE: Native keyboard handling setup complete");
}

- (void)sendEscapeKeyToReactNative
{
  // Send event to React Native via DeviceEventEmitter
  [[NSNotificationCenter defaultCenter] postNotificationName:@"NativeEscapeKeyPressed" object:nil];
  
  if (self.bridge) {
    [self.bridge.eventDispatcher sendAppEventWithName:@"NativeEscapeKeyPressed" body:@{}];
  }
}

@end

Step 2: React Native Event Listener (App.tsx)

Listen for native events in your React Native app:

import React from 'react';
import { Platform, View, DeviceEventEmitter } from 'react-native';

// Note: useTaskDetailPanel is an app-specific context hook (import not shown)
const GlobalKeyboardWrapper: React.FC<{ children: React.ReactNode }> = ({ children }) => {
  const { isVisible, closePanel } = useTaskDetailPanel();

  // Listen for native keyboard events from AppDelegate
  React.useEffect(() => {
    if (Platform.OS !== 'macos') return;

    console.log('🔑 NATIVE: Setting up native keyboard event listener');

    // Listen for native escape key events
    const subscription = DeviceEventEmitter.addListener('NativeEscapeKeyPressed', () => {
      console.log('🔑 NATIVE: Escape key received from native AppDelegate');
      if (isVisible) {
        console.log('🔑 NATIVE: Closing TaskDetailPanel via native event');
        closePanel();
      }
    });

    return () => {
      subscription.remove();
      console.log('🔑 NATIVE: Native keyboard event listener removed');
    };
  }, [isVisible, closePanel]);

  return (
    <View style={{ flex: 1 }}>
      {children}
    </View>
  );
};

// Wrap your app with the keyboard wrapper
function App() {
  return (
    <GlobalKeyboardWrapper>
      {/* Your app content */}
    </GlobalKeyboardWrapper>
  );
}

Key Benefits

  1. Reliable Event Capture: Native macOS event monitoring captures ALL key events
  2. No Chirping: Events are consumed at the native level (return nil)
  3. No Focus Issues: Doesn’t interfere with text editing or UI focus
  4. Performance: Native event handling is faster than JavaScript
  5. Extensible: Easy to add more key combinations

Key Codes Reference

Common macOS key codes for event.keyCode:

  • Escape: 53
  • Enter: 36
  • Space: 49
  • Arrow Up: 126
  • Arrow Down: 125
  • Arrow Left: 123
  • Arrow Right: 124
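If you later forward the raw event.keyCode from the native side instead of a single hard-coded notification, the table above maps naturally to a small dispatch helper on the JS side (the handler shape here is a hypothetical sketch, not part of the app above):

```javascript
// Mapping of the macOS key codes listed above to readable names
const MAC_KEY_CODES = {
  53: 'Escape',
  36: 'Enter',
  49: 'Space',
  126: 'ArrowUp',
  125: 'ArrowDown',
  123: 'ArrowLeft',
  124: 'ArrowRight',
};

// Dispatch a native keyCode to a handler map; returns true if handled.
// (Assumes the native side forwards event.keyCode in the event body.)
function handleNativeKey(keyCode, handlers) {
  const name = MAC_KEY_CODES[keyCode];
  if (name && typeof handlers[name] === 'function') {
    handlers[name]();
    return true;
  }
  return false;
}

// Example: handleNativeKey(53, { Escape: closePanel }) would call closePanel
```

The boolean return mirrors the native pattern: a handled key can be consumed (return nil in Objective-C), an unhandled one passed through.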

Why This Works

  1. Native Level: NSEvent addLocalMonitorForEventsMatchingMask captures events before they reach React Native
  2. Event Consumption: Returning nil consumes the event (like SwiftUI’s .handled)
  3. Selective Bridging: Only send events to React Native when needed
  4. Clean Separation: Native handles the mechanics, React Native handles the logic

This approach finally gave me reliable global keyboard shortcuts in React Native macOS with zero interference and zero chirping sounds! 🎉


Building cross-platform apps with React Native macOS. Sometimes you need to go native to get it right. 🚀

Robot Design Hub Launches

I’ve been working on this search engine site for robot designs for a couple months. It’s now deployed and I will soon launch it on ProductHunt. I used Coolify and Linode for this Python Flask app.

https://robots.greenrobot.com

I would love to know your comments on my design and site.

I am still seeking remote dev work

Hello, I am still seeking employment. I am an expert developer in many different languages and platforms. If you know anyone looking for a developer, let me know.

My resume is available at this link:
https://raw.githack.com/andytriboletti/publicfiles/main/resume/triboletti_andy_resume-latest.pdf

I created 4 new sites and filed 2 bug reports

• I created a React site on longevity:

https://longevity.greenrobot.com

• I created a PHP/MySQL job search engine site specializing in AI and ML jobs, AI Careers:

https://aicareers.greenrobot.com

• I created a Node/Express app, hosted on Firebase with Firestore, for developers to get ready to launch their app:

https://launchday.greenrobot.com

• I created a PHP/SQLite mental health lawyer directory site:

https://mentalhealthlawyers.greenrobot.com

• I filed a bug report for recast-navigation-js:

https://github.com/isaac-mason/recast-navigation-js/issues/468

• I filed a bug report for instanced-mesh:

https://github.com/agargaro/instanced-mesh/issues/128

So annoyed with the speed of my WordPress blogs on DreamHost, I started a new open source project

I’ve tried to make my WordPress DreamHost blogs faster, but I want to migrate everything over to Linode with new blogging software I created that uses SQLite and is easier to set up than WordPress. It’s still a work in progress:

https://github.com/greenrobotllc/greenblog