PXAI
v8.42
Version History
Version | Description | Date
8.42 | [ADDED SOURCE REGISTRY LIST VIEW] | 04/03/2026 16:19
8.41 | [LOCALIZED INDEX.PHP MENU AND MODALS TO US ENGLISH] | 04/03/2026 16:11
8.40 | [TRANSLATED FRONTEND TO US ENGLISH] | 04/03/2026 16:10
8.41 | [RESTORED DETAILED SOURCES VIEW (ARTICLES + EVENTS) IN US ENGLISH] | 04/03/2026 15:50
8.41 | [RESTORED DETAILED SOURCES VIEW (ARTICLES + EVENTS) IN US ENGLISH] | 04/03/2026 15:46
8.40 | [REVERTED TO STANDARD TABLE LAYOUT (US ENGLISH)] | 04/03/2026 15:40
8.39 | [FIXED DB CONNECTION SCOPE IN SOURCES LOGIC] | 04/03/2026 15:38
8.38 | [TRANSLATED SOURCES VIEW TO US ENGLISH] | 04/03/2026 15:01
8.37 | [FULL FRONTEND TRANSLATION (MENU, FEED, COMMENTS, TTS) TO US ENGLISH] | 04/03/2026 14:58
8.67 | [MANUAL OVERRIDE OF GREEK MENU ITEMS] | 04/03/2026 13:36
8.66 | [NGINX OPTIMIZED MENU FIX] | 04/03/2026 13:33
8.65 | [FIXED GREEK MENU ITEMS] | 04/03/2026 13:32
8.60 | [MENU & UI LOCALIZATION TO US ENGLISH] | 04/03/2026 13:30
8.50 | [FULL TRANSLATION TO US ENGLISH] | 04/03/2026 13:27
9.75 | [INJECTED EVENT FUSION_SUMMARY INTO FEED LOOPS TO DISPLAY AI TAGS CORRECTLY] | 26/02/2026 14:44
PXAI Audio Feed
24/03 17:36 | dev.to | Catching a vLLM Latency Spike with eBPF and an Open-Weight LLM
Tags: eBPF, vLLM, MiniMax, Ollama, MCP, open‑source

24/03 05:05 | dev.to | The Ultimate Job Finding-Management Tool
Tags: job search, chrome extension, ollama, skill alignment, semi‑automation, neon UI

23/03 17:37 | dev.to | 🚀 Local AI in 2026: My Journey Through the Desert (From Terminal to GPU)
Tags: local AI, MacBook Pro M1, Ollama, GPU, Claude, Gemini

23/03 02:54 | dev.to | AuraCoreCF: a local‑first cognitive runtime (not another chatbot wrapper)
Tags: AuraCoreCF, local‑first, cognitive runtime, LLM, internal state, AI agent

23/03 00:02 | dev.to | Top 6 AI Agent Memory Frameworks for Devs (2026)
Tags: AI memory frameworks, Mem0, Zep, Letta, Cognee, LangChain

22/03 19:11 | dev.to | I built a local-first AI CLI to replace Copilot (using Ollama)
Tags: local-first AI, CLI assistant, Ollama, terminal, offline, code generation

22/03 14:35 | dev.to | Local AI in 2026: Running Production LLMs on Your Own Hardware with Ollama
Tags: local AI, LLMs, Ollama, cost savings, consumer hardware, benchmark

22/03 14:21 | dev.to | Running Qwen2.5-32B on RTX 4060 8GB — Beating M4 at 10.8 t/s with llama.cpp
Tags: Qwen2.5-32B, RTX 4060, llama.cpp, hybrid inference, local LLM, token throughput

22/03 14:19 | dev.to | llama.swap Model Switcher Quickstart for OpenAI-Compatible Local LLMs
Tags: llama‑swap, local LLM, OpenAI API, proxy, model switching, vLLM