# Knowledge Wiki

## Concepts
- .gitpod.yml configuration — Configuration file for Gitpod workspaces that specifies Docker image, build steps, and development environment customization, with defaults supporting Maven, Gradle, and Java without additional configuration.
- @Configuration proxyBeanMethods — The proxyBeanMethods parameter in @Configuration annotation controls whether @Bean methods are proxied to ensure single-instance behavior (Full mode) or create new instances on each call (Lite mode)
- @ConfigurationProperties binding — Annotation-based mechanism for binding external configuration properties to strongly-typed Java beans, using a prefix attribute to map properties hierarchically.
- @SpringBootApplication annotation — Core Spring Boot annotation that combines @Configuration, @EnableAutoConfiguration, and @ComponentScan to bootstrap the application context from the main method
- /steer Command — Mid-task course correction mechanism allowing users to inject guidance and redirect agent focus without interrupting execution—preserving completed work, prompt cache, and conversation history while addressing task drift during long-running research or analysis.
- $http service in Angular — Angular's built-in service for making HTTP requests to external servers, enabling AJAX communication within Angular applications.
- 2080-learning-principle|20/80 Learning Principle — A focused learning strategy that prioritizes mastering the 20% of a technology that delivers 80% of practical value, emphasizing goal-oriented problem solving over comprehensive study.
- 2080|20/80 Learning Principle — A strategy of focusing on the 20% of a technology's core content that delivers 80% of its practical value, emphasizing goal orientation and problem solving over comprehensive in-depth study.
- [[23|23 Classic Design Patterns]] — The canonical collection of design patterns categorized into creational, structural, and behavioral types, originally documented by the Gang of Four (Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides), forming the foundation of reusable object-oriented software architecture.
- [[24|24-Hour Reset System]] — A one-day execution framework designed to break self-reinforcing loss-of-control loops by creating closed feedback loops where the brain receives 'I won' signals through visible deliverables and MIT completion.
- [[32-bit-odbc-architecture-requirement|32-bit ODBC architecture requirement]] — ER/Studio 8 (32-bit) specifically requires 32-bit MySQL ODBC drivers rather than 64-bit drivers, highlighting the importance of matching application and driver architecture.
- A/B Testing Deployment — A traffic routing technique based on user attributes rather than deployment logic, enabling parallel execution of different versions to test user experience variations like page layout or button colors using parameters like cookies, user agents, or geographic location.
- Absolute vs relative buffer access — Two access modes for reading ByteBuf data: absolute access using getByte(i) reads without moving readerIndex, while relative access using readByte() consumes data by advancing the pointer.
- Absolute vs relative ByteBuf access — Two access modes for reading buffer contents: absolute access (getByte) reads without moving pointers, while relative access (readByte) advances readerIndex.
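The getByte/readByte distinction can be sketched with the JDK's `ByteBuffer`, which uses the same two access modes (Netty's ByteBuf tracks a separate readerIndex, but the semantics of absolute `get(i)` versus relative `get()` are the same); the class name here is illustrative:

```java
import java.nio.ByteBuffer;

// Absolute get(i) leaves the read position untouched; relative get()
// consumes a byte by advancing it — mirroring ByteBuf's getByte()/readByte().
class AccessModes {
    static int[] demo() {
        ByteBuffer buf = ByteBuffer.wrap(new byte[]{10, 20, 30});
        int absolute = buf.get(1);        // reads 20; position still 0
        int posAfterAbs = buf.position();
        int relative = buf.get();         // reads 10; position advances to 1
        int posAfterRel = buf.position();
        return new int[]{absolute, posAfterAbs, relative, posAfterRel};
    }
}
```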
- AbstractPipeline — Abstract base class that serves as the core implementation of Stream interfaces, managing the construction and evaluation of stream pipelines.
- AbstractProcessor — A Java API class used to create annotation processors that execute during compilation, enabling compile-time code generation and validation.
- Access Log Service (ALS) — Istio's service for capturing and exporting HTTP/TCP access logs from service mesh workloads to external backends like OpenTelemetry
- ACME challenge validation — Domain ownership verification method used by Let's Encrypt requiring HTTP accessibility of validation files at specific .well-known paths
- Action button types — Configurable button behaviors including 'type link' for URL navigation and other action types that determine what happens when a button is clicked.
- activeTab permission in Chrome Extensions — Permission granting temporary access to the currently active tab when user clicks the extension action, enabling script execution without broad host permissions
- Admonition plugin — Obsidian plugin for creating collapsible, customizable callout blocks with support for custom titles, icons, colors, and various admonition types.
- Admonition plugin (Obsidian) — Legacy Obsidian plugin for creating styled callout boxes with custom titles, collapsible sections, icons, and colors; functionality largely superseded by native callouts.
- Admonition plugin syntax — Legacy Obsidian plugin syntax for creating styled callout blocks using ```` ```ad- ````-prefixed code blocks with optional configuration for title, collapse state, icon, and color.
- Admonition Syntax — The specific Markdown blockquote syntax using > [!type] notation to create callouts, with optional + (always open) or - (default closed) modifiers.
- Admonition type syntax — Markdown code block syntax pattern using the ```` ```ad- ```` prefix format to create different styled information boxes in Obsidian, supporting types like note, question, bug, and warning.
- Admonition types — Predefined content block categories including note, question, bug, and warning types, each with distinct visual styling and default icons.
- Advanced Java Concurrency — Multi-threaded programming techniques including thread pools, locks, synchronized blocks, concurrent collections, atomic variables, CompletableFuture for async operations, and Striped64 for high-performance counters under contention.
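The Striped64-based counters mentioned above can be illustrated with the JDK's `LongAdder`, which spreads contended updates across per-thread cells and sums them on read, making concurrent writes far cheaper than a single `AtomicLong` CAS loop (a minimal sketch; the class name is illustrative):

```java
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.IntStream;

class AdderDemo {
    // Many threads increment concurrently; LongAdder absorbs the contention
    // by striping updates, then sum() aggregates the cells.
    static long countParallel(int n) {
        LongAdder adder = new LongAdder();
        IntStream.range(0, n).parallel().forEach(i -> adder.increment());
        return adder.sum();
    }
}
```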
- Agent runtime management — Unified dashboard system for managing all computing resources (local daemon + cloud runtime) with automatic detection of available CLI tools and agent capabilities.
- Agent Skills — A structured AI coding workflow framework proposed by Addy Osmani that packages a senior engineer's software development lifecycle (requirements definition, planning, implementation, testing, review, release) into reusable agent skills, using 7 command entry points (/spec, /plan, /build, /test, /review, /code-simplify, /ship) to force AI agents to follow the complete engineering process instead of skipping key steps.
- Agent Teams Pattern (Claude Code) — An experimental workflow where multiple Claude Code instances communicate directly through shared task lists rather than through a central coordinator, suitable for cross-role collaboration but with significantly higher token consumption (4-7x single session).
- Agentic AI workflow patterns — Multi-step AI agent capabilities including planning, tool usage, error recovery, and decision-making for complex tasks like research monitoring, code assistance, and document analysis.
- Aggregated Payment System — A payment platform that integrates multiple payment channels (WeChat Pay, Alipay, etc.) into a unified interface, allowing merchants to accept payments through various methods without separate integrations for each provider.
- AI Agent consciousness monitoring — Practice of visualizing and managing the complete internal state of autonomous AI agents including memory, skills, learned patterns, and resource consumption.
- AI Agent consciousness visualization — Practice of creating visual dashboards to monitor AI agent's internal state including memory, skills, session history, behavioral patterns, error corrections, and token consumption across multiple themed UI panels
- AI Agent token cost tracking — Monitoring capability that tracks and displays token consumption and associated costs broken down by model, helping users understand resource usage patterns of AI agents
- AI coding agent behavior patterns — Common failure modes in AI coding agents, including overconfidence, guessing instead of clarifying, over-engineering, random refactoring, and lack of verification; these are the problems the framework addresses.
- AI VTuber plugin system — Extensible architecture supporting third-party plugins for platforms like Bilibili, Claude Code, and HomeAssistant integration
- AI VTuber technology stack — Technical architecture combining Vue.js, TypeScript, Rust, Three.js, Live2D/VRM models, and cross-platform runtimes (Electron, Capacitor) for AI virtual characters
- AI Agent Workflow Lifecycle — Decomposes the software development lifecycle into sequential phases: spec (specification) → plan (task breakdown and prioritization) → build (incremental slice implementation) → test (TDD verification) → review (code review and quality gates) → ship (CI/CD, documentation, release), with a simplification feedback loop to manage complexity.
- Alertmanager alert routing — Alert management component that receives alerts from Prometheus Server, performs deduplication and grouping, routes to receivers (email, pagerduty), and handles notification timing with configurable wait/repeat intervals.
- Alertmanager configuration and routing — Alert management component receiving alerts from Prometheus, supporting deduplication, grouping, and routing to multiple receivers (email, pagerduty) with configurable resolution timeout, repeat intervals, and SMTP configuration
- Aliyun Yum Mirror Configuration — Process of configuring CentOS systems to use Aliyun's mirror repository as the yum package source, including backup and replacement of repository files.
- All-in-One Kubernetes Installation — Single-node deployment method for Kubernetes and KubeSphere on Linux that installs both control plane and worker components on one machine, suitable for development, testing, and learning environments.
- AMQP Protocol — The Advanced Message Queuing Protocol, a standardized application-layer protocol for message-oriented middleware that defines the format and behavior of message communication between clients and brokers.
- Analysis paralysis in long-context models — Models processing 100K+ tokens may enter infinite thinking loops requiring specific parameter constraints to prevent breakdown
- Andrej Karpathy Skills — A lightweight instruction layer for AI coding agents based on Andrej Karpathy's observations of common failure modes, designed to make agents behave more like cautious engineers than overconfident writers.
- Android app data directory structure — The standard filesystem organization pattern /Android/data/[package.name]/ used by Android applications to store private app data and downloads.
- Android traffic capture with proxy tools — Technique for intercepting and analyzing network traffic from Android applications by configuring proxy settings between the emulator and tools like Fiddler
- Angular dependency injection providers — The four Angular 1.x provider types for creating and injecting services and values across an application: value, constant, factory, and service.
- Angular form validation directives — Common Angular directives for form validation including ng-disabled for conditional states and required for mandatory field validation.
- Angular scope data sharing pattern — The technique of sharing data between multiple controllers by binding them to the same data source or parent scope, enabling synchronized state across components.
- Angular-JavaScript isolation boundary — The separation between Angular framework code and vanilla JavaScript where functions, variables, and events cannot directly interact, requiring controller mediation for communication between the two contexts.
- Annotation Processor — Java compiler plugin that processes annotations at compile time to generate new source files, perform validation, or create resources.
- AnnotationConfigApplicationContext parent-child relationship — Using setParent() to establish hierarchical relationships between annotation-based application contexts, where child contexts inherit environment configuration from parent contexts.
- Apache Commons FileUpload — Server-side file upload handling using Apache ServletFileUpload with DiskFileItemFactory for parsing multipart/form-data requests and extracting FileItem objects
- Apache Kafka — A distributed event streaming platform used for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications
- Apache Kafka architecture components — Core structural elements of Apache Kafka messaging system including producers, consumers, consumer groups, topics, brokers, partitions, and offsets that work together to enable distributed event streaming.
- Apache Kafka Distributed Messaging — Distributed event streaming platform for high-performance data pipelines using topics, partitions, brokers, and consumer groups, with ZooKeeper integration for cluster coordination and support for both real-time and batch processing workloads.
- Apache Spark components — The unified analytics engine Apache Spark and its main modules, including core data processing (Spark Core), SQL query interface (Spark SQL), and real-time stream processing (Spark Streaming), with Scala as the primary implementation language.
- Apache Storm Architecture — Distributed real-time stream processing system with master-worker architecture (Nimbus, Supervisors, Workers) and core data flow components (Spouts, Bolts, Tuples, Streams, Topologies)
- Apache Thrift Architecture — Facebook's cross-language RPC framework featuring multiple protocol formats (Binary, Compact, JSON), transport layers (Socket, File, Memory), and server models (Simple, ThreadPool, Nonblocking, THsHa).
- Apollo client integration pattern — Application configuration through environment variables (C_OPTS) specifying Apollo meta server URL and environment designation (fat/pro/dev), enabling runtime configuration retrieval without code changes.
- Apollo configuration center architecture — Three-tier distributed configuration management system consisting of ConfigService (provides configuration to clients via push/pull), AdminService (manages configuration changes and persistence), and Portal (web UI for configuration management), all backed by ConfigDB database.
- Apollo Configuration Center Integration — Configuration management solution using Apollo ConfigService and Portal deployed on K8S, enabling environment-specific configuration distribution to microservices via ConfigMap integration.
- Apollo containerized deployment on Kubernetes — Complete Docker containerization and Kubernetes deployment of Apollo components (ConfigService, AdminService, Portal) using custom Dockerfiles based on JRE8, ConfigMap-based configuration injection, and Ingress exposure through domain names.
- Apollo multi-environment deployment strategy — Deploying separate Apollo instances (ConfigService/AdminService) for test and production environments using Kubernetes namespaces (test/prod), environment-specific databases (ApolloConfigTestDB/ApolloConfigProdDB), and isolated ingress endpoints (config-test.od.com/config-prod.od.com).
- Appium mobile browser testing — Extension of Selenium WebDriver for automating mobile browsers (Chrome on Android) with shared protocols and commands
- Application protocol framing and parsing — Techniques for structuring and parsing messages in application protocols, including delimiter-based framing for text messages and explicit length fields for binary messages.
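Delimiter-based framing as described above can be sketched in a few lines: a framer accumulates incoming chunks and emits only complete, delimiter-terminated messages, retaining any trailing partial frame (a minimal sketch assuming a `'\n'` delimiter; the class name is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

class LineFramer {
    private final StringBuilder partial = new StringBuilder();

    // Feed a chunk as it arrives from the wire; returns only complete frames,
    // buffering any trailing bytes until their delimiter shows up.
    List<String> feed(String chunk) {
        List<String> frames = new ArrayList<>();
        partial.append(chunk);
        int nl;
        while ((nl = partial.indexOf("\n")) >= 0) {
            frames.add(partial.substring(0, nl));
            partial.delete(0, nl + 1);
        }
        return frames;
    }
}
```

Length-prefixed binary framing follows the same accumulate-then-slice shape, except the loop reads an explicit length field instead of searching for a delimiter.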
- Application startup callbacks — Interfaces (ApplicationRunner and CommandLineRunner) for executing code after Spring Boot application startup but before it begins accepting requests, useful for initialization tasks.
- ApplicationContext lifecycle management — Methods for controlling context lifecycle including refresh() for initialization, close() for shutdown, and isActive() for checking runtime status across hierarchical context structures.
- ApplicationContext parent-child hierarchy — Child contexts can see beans defined in parent contexts, but parent contexts cannot access beans from their children, enabling layered application architecture with controlled bean visibility.
- ArgoCD — A declarative GitOps continuous delivery tool for Kubernetes that synchronizes and maintains desired application states from Git repositories.
- ArgoCD Configuration Repository — A Git repository structure that contains declarative configuration files (manifests) defining desired application states for ArgoCD to synchronize with the Kubernetes cluster.
- ArgoCD declarative installation — The GitOps principle of deploying ArgoCD itself through Kubernetes manifest application using YAML from the official repository, rather than imperative commands.
- ArgoCD Initial Access Configuration — Authentication setup process for ArgoCD involving retrieving the initial admin password from a Kubernetes secret and using port-forwarding to access the web UI.
- ArgoCD initial admin authentication — The process of retrieving the default admin password from the argocd-initial-admin-secret Kubernetes secret using kubectl and base64 decoding.
- ArgoCD initial authentication workflow — The process of accessing ArgoCD's default admin credentials using a Kubernetes secret (argocd-initial-admin-secret) and retrieving the initial password via base64 decoding.
- ArgoCD installation and authentication — Deployment process for ArgoCD continuous delivery tool on Kubernetes clusters using kubectl manifests, with initial admin access through Kubernetes secrets (argocd-initial-admin-secret) requiring base64 decoding for password retrieval.
- ArgoCD installation on Kubernetes — Step-by-step process for deploying ArgoCD into a Kubernetes cluster using kubectl, including namespace creation, manifest application, and initial access setup
- ArgoCD server service configuration — Kubernetes service exposure methods for ArgoCD, including LoadBalancer type and port-forwarding techniques for accessing the ArgoCD server interface.
- Armory Spinnaker — A commercial distribution of Spinnaker continuous delivery platform with enhanced features and simplified deployment patterns for Kubernetes environments.
- Artifact Hub — A web-based repository for discovering, finding, and distributing Helm charts and other Kubernetes packages, serving as a central registry for the community.
- Asynchronous I/O model — The fundamental design approach in Netty where all I/O operations are non-blocking and asynchronous, allowing operations to proceed without waiting for completion
- Asynchronous operation state machine — The four-state lifecycle model for asynchronous operations: uncompleted, successfully completed, failed (with cause), or cancelled, tracked via isDone(), isSuccess(), isCancelled(), and cause().
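The four states can be classified with a small decision ladder; this sketch uses the JDK's `CompletableFuture` as a stand-in (Netty's Future exposes the same lifecycle via isDone()/isSuccess()/isCancelled()/cause(), but isn't in the stdlib), and the class name is illustrative:

```java
import java.util.concurrent.CompletableFuture;

class FutureStates {
    // Order matters: cancellation also completes the future exceptionally,
    // so it must be checked before the generic "failed" case.
    static String classify(CompletableFuture<?> f) {
        if (!f.isDone()) return "uncompleted";
        if (f.isCancelled()) return "cancelled";
        if (f.isCompletedExceptionally()) return "failed";
        return "success";
    }
}
```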
- Asynchronous Report Download System — A distributed system architecture for handling large report generation requests through asynchronous processing, featuring query registration, status tracking, background job processing, and cloud storage delivery
- Asynchronous Report Generation Architecture — A message queue-based report generation pattern where REST API endpoints enqueue report requests, process them asynchronously via background workers, and store results in cloud storage with status tracking in a database.
- Asynchronous Report Generation Pattern — A message-driven architecture pattern for handling large-scale report generation that decouples request processing from document creation through asynchronous messaging queues.
- Atomic note principle — The practice of recording only one idea per note to enable flexibility, reusability, and independent understanding of each knowledge unit
- AtomicIntegerFieldUpdater — A Java concurrency utility that enables atomic updates to specified volatile int fields of selected classes without creating new AtomicInteger objects, useful for reducing memory overhead in existing data structures.
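A minimal sketch of the updater pattern (the `HitCounter` class is hypothetical): the volatile int field is mutated atomically in place, avoiding a per-instance `AtomicInteger` wrapper object.

```java
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;

class HitCounter {
    // The target field must be volatile, non-static, and visible to the
    // class that creates the updater.
    volatile int hits;

    private static final AtomicIntegerFieldUpdater<HitCounter> HITS =
            AtomicIntegerFieldUpdater.newUpdater(HitCounter.class, "hits");

    int record() {
        return HITS.incrementAndGet(this); // lock-free atomic increment
    }
}
```

One static updater serves every instance of the class, which is exactly where the memory saving over one `AtomicInteger` per object comes from.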
- Authorization header override — Ext Authz server capability to add or modify HTTP headers in user requests based on check request content, used for testing header manipulation behavior in authorization filters.
- Automated Payment Callback System — A system that automatically processes payment callbacks and notifications, eliminating manual verification steps and enabling real-time payment confirmation and order status updates.
- Automated SSL with Let's Encrypt — Integration between Traefik and Let's Encrypt certificate authority to automatically provision and renew SSL/TLS certificates for proxied services without manual intervention.
- Automatic sidecar injection — An Istio feature that automatically adds the Envoy proxy sidecar to pods in marked namespaces, eliminating the need for manual injection during deployment.
- automatic sidecar injection requirement — The prerequisite that Istio automatic sidecar injection must be enabled in the cluster for the sample service commands to work without modification, otherwise manual sidecar injection is required.
- automatic-導航地圖 (automatic navigation map) — A navigation map or index for automation-related documentation, serving as an organizational hub to structure and access DevOps content related to automation topics.
- Autonomous agent lifecycle — Complete task management workflow from enqueue → claim → start → complete/fail with WebSocket real-time progress updates and agent-initiated status reporting.
- autoscaling prerequisites for Istio services — Kubernetes Horizontal Pod Autoscaler requires CPU requests on all containers including the injected istio-proxy sidecar container for proper autoscaling behavior.
- B+ tree index structure — The underlying data structure used by MySQL InnoDB for indexing, featuring a directory-plus-data structure with bidirectional links between nodes at the same level, organized in a B+ tree format that can store millions of records at height 3.
- B2B order workflow — The structured sequence of steps and processes involved in business-to-business order processing, from initial order placement through fulfillment and delivery.
- Back-to-back order — A B2B order fulfillment arrangement where a supplier receives a customer order and simultaneously places a corresponding order with their own supplier, with goods shipped directly from the supplier to the end customer without intermediate storage.
- Backlinking — A core feature enabling bidirectional connections between notes where links are automatically tracked as backlinks on the target note, forming a knowledge web through interconnected references
- BalancedResourceAllocation scheduling priority — Scoring algorithm that favors nodes with balanced resource allocation across CPU, memory, and volume fractions, preventing skewed utilization where one resource type is overcommitted while others remain underutilized.
- BaseStream interface — The base interface for all stream types in Java 8, extending AutoCloseable and defining core methods like sequential() that return new stream instances.
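The BaseStream-level methods can be seen from any concrete stream type, since `Stream`, `IntStream`, etc. all extend it; a small sketch (class name illustrative) showing the mode-switching and AutoCloseable behavior it defines:

```java
import java.util.stream.IntStream;
import java.util.stream.Stream;

class BaseStreamDemo {
    static long sumSequential() {
        // parallel() and sequential() are BaseStream methods; the terminal
        // operation runs in whichever mode was requested last.
        return IntStream.rangeClosed(1, 100).parallel().sequential().sum();
    }

    static boolean isParallel() {
        // BaseStream extends AutoCloseable, so streams work in
        // try-with-resources; isParallel() queries the current mode.
        try (Stream<String> s = Stream.of("a", "b").parallel()) {
            return s.isParallel();
        }
    }
}
```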
- Bean visibility in context hierarchy — Child contexts can access and retrieve beans defined in parent contexts, but parent contexts cannot access beans defined in their children, creating a one-directional visibility model.
- BeanDefinitionRegistryPostProcessor — Spring extension point for modifying bean definitions during container initialization, before regular beans are instantiated
- Behavioral Design Patterns (行为型模式) — Category of design patterns concerned with algorithms and assignment of responsibilities between objects, including State Pattern, Visitor Pattern, Interpreter Pattern, and others that define how objects interact and communicate.
- Benchmark vs real-world model evaluation — MMLU scores don't always reflect practical performance; Gemma 4 26B outperforms higher-scoring models in long-document tasks like codebase analysis and financial report processing
- BestEffort QoS Pods — Lowest-priority Kubernetes pod classification where no Request or Limit values are set for any containers, making these pods the first to be evicted when system resources are constrained.
- Bidirectional handlers — ChannelHandlers that implement both inbound and outbound interfaces (like InboundOutboundHandlerX) and participate in both event flows, appearing in both execution sequences.
- Bidirectional linking — A core feature enabling bidirectional connections between notes where links are automatically tracked as backlinks on the target note
- Big data core challenges — The two fundamental challenges in big data systems: data storage (managing volume, persistence, and distributed access) and data computation (processing, analyzing, and deriving insights from large-scale datasets).
- Big data framework comparison — Comparison of the two major big data processing frameworks: Hadoop (disk-based batch processing with comprehensive ecosystem) versus Spark (memory-based processing with unified APIs for batch, streaming, and SQL workloads).
- Bilibili download tool — Software or utilities for downloading and managing video content from the Bilibili platform, particularly for accessing downloaded files on Android devices.
- Bilibili mobile download directory — The default Android filesystem path where the Bilibili app stores downloaded video files on mobile devices.
- blackbox-exporter — Prometheus exporter for monitoring endpoint liveness and availability over HTTP, HTTPS, TCP, DNS, ICMP and other protocols, configured via ConfigMap with probe modules
- Block reference anchors — Obsidian's paragraph anchoring system using ^anchor-name syntax to create referenceable, clickable locations within documents, commonly paired with interactive elements like buttons.
- Blocking and Non-blocking I/O — Process behavior distinction describing whether a thread waits suspended (blocking) or continues execution (non-blocking) while data is not ready during I/O operations.
- Bloom filter — A space-efficient probabilistic data structure that tests set membership with a configurable false positive rate but no false negatives, commonly used to prevent cache penetration.
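A minimal Bloom filter sketch over a `BitSet` (sizing and the double-hashing scheme are illustrative choices, not a production design): `add` sets k bit positions per key, and `mightContain` answers "definitely absent" or "probably present".

```java
import java.util.BitSet;

class BloomFilter {
    private final BitSet bits;
    private final int m; // number of bits
    private final int k; // number of hash probes per key

    BloomFilter(int m, int k) {
        this.bits = new BitSet(m);
        this.m = m;
        this.k = k;
    }

    // Derive k probe positions from two base hashes (double hashing).
    private int probe(String key, int i) {
        int h1 = key.hashCode();
        int h2 = Integer.rotateLeft(h1, 16) ^ 0x9e3779b9;
        return Math.floorMod(h1 + i * h2, m);
    }

    void add(String key) {
        for (int i = 0; i < k; i++) bits.set(probe(key, i));
    }

    // false => key was definitely never added; true => probably added
    // (false positives possible, false negatives impossible).
    boolean mightContain(String key) {
        for (int i = 0; i < k; i++)
            if (!bits.get(probe(key, i))) return false;
        return true;
    }
}
```

For cache-penetration defense, the filter sits in front of the cache: a `false` answer lets the service reject the lookup without ever touching the database.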
- Blue-Green Deployment — A deployment strategy where the new version runs alongside the old version, with traffic switching from old to new at the load balancer layer only after the new version passes testing, ensuring instant rollback capability.
- Blue-green deployment strategy — Deployment pattern where new versions run alongside existing versions in separate namespaces (test/prod) with controlled traffic migration via Ingress and Service resources, enabling zero-downtime releases.
- Blue/Green Deployment (Kubernetes) — A zero-downtime deployment strategy where new and old versions run simultaneously, with traffic switching via Service label selectors to enable instant rollback
- Blue/Green Deployment Resource Overhead — Blue/green deployments require temporarily running double the resource capacity during the transition period when both old and new versions are active simultaneously, making it suitable for environments with sufficient resources.
- Bookinfo Docker image build and push workflow — Process for building, tagging, and pushing custom Bookinfo application container images to Docker registry using build-services.sh and build_push_update_images.sh scripts.
- Bookinfo sample application — A canonical Istio demo application used for demonstrating service mesh capabilities, observability, and traffic management patterns in Kubernetes environments.
- Bookinfo service architecture — Bookinfo application consists of multiple services: productpage (frontend), details, reviews (with v1, v2, v3 versions), and ratings (with v1, v2 versions), demonstrating version deployment patterns.
- Bookinfo testing and validation procedures — Testing methodology for Bookinfo deployment including pod status verification, CLI-based connectivity testing using kubectl exec, and browser-based validation.
- Bookinfo version independence from Istio — Bookinfo sample versioning is independent of Istio versions, allowing the sample to work with any Istio version.
- bot development with local tunneling — Development of chat bots (LINE, Telegram, Slack) using local tunnels to receive and test callback events without deploying to production servers.
- bounded-type-parameters — Generic declarations constrained by the extends keyword on type parameters (upper bounds, e.g., `<T extends Comparable<T>>`) or the super keyword on wildcards (lower bounds, e.g., `<? super Integer>`) to limit the acceptable types to a specific hierarchy or interface.
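Both bound forms in one short sketch (class and method names illustrative): an upper-bounded type parameter restricts what `max` accepts, while a lower-bounded wildcard lets `fill` write into any sufficiently general list.

```java
import java.util.List;

class Bounds {
    // Upper bound: T must be comparable to itself (or a supertype of itself).
    static <T extends Comparable<? super T>> T max(List<T> xs) {
        T best = xs.get(0);
        for (T x : xs) if (x.compareTo(best) > 0) best = x;
        return best;
    }

    // Lower-bounded wildcard: accepts List<Integer>, List<Number>, List<Object>.
    static void fill(List<? super Integer> sink, int n) {
        for (int i = 0; i < n; i++) sink.add(i);
    }
}
```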
- branch-naming-convention — A systematic approach to naming Git branches using ticket IDs and descriptive prefixes (e.g., feature/29153_csv_export) to organize development work across multiple repositories.
- Bridge network interface configuration — Static network configuration for Linux bridge devices (br0) including IP addressing, gateway settings, and binding physical network interfaces to the bridge in sysconfig scripts.
- bright-boy technical documentation — Online technical documentation website at bright-boy.gitee.io providing detailed explanations of design patterns and other programming concepts with structured, Chinese-language content suitable for developers learning software architecture principles.
- Browser extension clipboard integration — Integration between browser extensions and system clipboard for seamless transfer of captured web content to other applications.
- Browser Hot Reload — A development technique that automatically refreshes the browser when source files change, eliminating the need for manual refresh during development
- Browser HSTS cache management — Procedures for clearing HSTS settings from web browsers (Chrome, Safari, Firefox) during development or troubleshooting, which is necessary since HSTS policies persist beyond server configuration changes.
- Browser tab metadata extraction — Techniques for extracting structured information (titles, URLs) from web browser tabs for note-taking and documentation workflows
- browser_action Chrome Extension component — Manifest configuration that creates a toolbar button with popup interface, defined by default_icon and default_popup properties
- Browser-based AI inference — WebGPU-based local LLM inference running entirely in web browsers without server dependencies
- Browser-based IDE development with Gitpod — Cloud development platform enabling rapid execution of GitHub projects directly in the browser without local cloning, featuring ephemeral workspaces configured through .gitpod.yml files.
- browser-based-content-extraction — Techniques and tools for capturing, formatting, and transferring web content directly into personal knowledge systems, often using browser extensions to bridge web reading and note-taking workflows.
- Browser-native ES6 modules — Using JavaScript ES6 modules directly in the browser without build tools or transpilation like Babel, leveraging the browser's native module compilation capabilities.
- Browser-Sync — A development tool that provides synchronized browser testing, live reloading, and cross-device and cross-browser interaction mirroring through a proxy server
- Build-free Vue development — Developing Vue applications without webpack, babel, or other build tools by leveraging native browser support for ES6 modules and Vue's CDN distribution
- Builder-Validator Chain — A Split & Merge pattern variation where one subagent builds code while another validates it, with the main agent mediating the workflow to enable automated construction with quality checking.
- Burstable QoS Pods — Intermediate Kubernetes pod classification for non-Guaranteed pods that have at least one container with memory or CPU request set, providing minimum resource guarantees with potential to use more when available.
- Button block ID references — A convention using block references (syntax like ^button-zcju) in Obsidian to create unique identifiers for button blocks, enabling them to be referenced or embedded elsewhere in the knowledge base.
- Button code block syntax — A special markdown-based syntax in Obsidian using fenced code blocks with the 'button' language identifier to define button properties like name, type, action, color, and plugin integrations.
- Button syntax and configuration — Code block syntax format using ```button delimiters to define button properties like name, type, action, color, and templater integration settings.
- Buttons plugin (Obsidian) — An Obsidian plugin by shabegom that enables users to create interactive buttons within notes for actions like opening links, running commands, or inserting content using code blocks with special syntax.
- ByteBuf clear() operation — Resets buffer state by setting both readerIndex and writerIndex to zero, converting all buffer space to writable without erasing the underlying content data.
- ByteBuf discardable bytes management — Technique for reclaiming memory space from already-read data using discardReadBytes() to compact the buffer and increase writable capacity.
- ByteBuf read/write pointers — ByteBuf uses readerIndex and writerIndex pointers to partition buffer space into discardable bytes, readable content, and writable regions, with operations like getByte() for absolute access and readByte() for relative access that advances the pointer.
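The pointer arithmetic described in the three ByteBuf entries above can be sketched with a toy model. This is an illustrative stand-in, not Netty's actual `ByteBuf` class, but it follows the same partition: discardable bytes before `readerIndex`, readable bytes between the two indices, writable bytes after `writerIndex`.

```java
// Toy model of ByteBuf's two-pointer scheme (not Netty's real implementation):
//   0 .. readerIndex            -> discardable bytes (already read)
//   readerIndex .. writerIndex  -> readable bytes
//   writerIndex .. capacity     -> writable bytes
public class ToyByteBuf {
    private final byte[] data;
    private int readerIndex = 0;
    private int writerIndex = 0;

    public ToyByteBuf(int capacity) { this.data = new byte[capacity]; }

    // Relative write: stores a byte and advances writerIndex.
    public void writeByte(byte b) { data[writerIndex++] = b; }

    // Relative read: returns a byte and advances readerIndex.
    public byte readByte() { return data[readerIndex++]; }

    // Absolute read: does NOT move any pointer (like getByte()).
    public byte getByte(int index) { return data[index]; }

    public int readableBytes() { return writerIndex - readerIndex; }
    public int writableBytes() { return data.length - writerIndex; }

    // clear(): resets both pointers to zero without erasing content.
    public void clear() { readerIndex = writerIndex = 0; }

    // discardReadBytes(): compacts the buffer, reclaiming discardable space
    // and increasing writable capacity.
    public void discardReadBytes() {
        System.arraycopy(data, readerIndex, data, 0, readableBytes());
        writerIndex -= readerIndex;
        readerIndex = 0;
    }
}
```

Note the trade-off the real API makes explicit: `clear()` is cheap (pointer reset only), while `discardReadBytes()` pays a memory copy to reclaim space.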
- ByteBuf reference counting — Memory management mechanism in Netty's ByteBuf using ReferenceCounted interface to track buffer lifecycle and enable explicit resource cleanup.
- ByteBuffer types and allocation — DirectByteBuffer (allocateDirect) for off-heap native memory that enables zero-copy I/O and HeapByteBuffer (allocate) for Java heap allocation, with methods like asReadOnlyBuffer and compact for buffer manipulation.
- CA key generation with password protection — Using OpenSSL genrsa with -des3 flag to create password-protected CA private keys, requiring passphrase entry for encryption security.
- CA Root Certificate Installation — The process of installing Certificate Authority root certificates on a device to establish trust for HTTPS inspection, referenced here in the context of Fiddler setup.
- Cache penetration — A performance problem where queries for non-existent data repeatedly bypass the cache and hit the database directly, often mitigated using Bloom filters to quickly detect invalid keys.
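The cache-penetration mitigation above can be sketched with a minimal in-memory model. Here a `HashSet` stands in for the Bloom filter and a `Map` for the database; all names are illustrative, and a real Bloom filter would trade exactness for far smaller memory use.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch: reject queries for non-existent keys before they reach the database.
public class PenetrationGuard {
    private final Map<String, String> db;
    private final Map<String, String> cache = new HashMap<>();
    private final Set<String> keyFilter = new HashSet<>();  // stand-in "Bloom filter"
    public int dbHits = 0;  // counts queries that reached the "database"

    public PenetrationGuard(Map<String, String> database) {
        this.db = database;
        keyFilter.addAll(database.keySet());
    }

    public String get(String key) {
        if (!keyFilter.contains(key)) return null;  // filtered: never hits the DB
        String value = cache.get(key);
        if (value == null) {                        // cache miss falls through to the DB
            dbHits++;
            value = db.get(key);
            cache.put(key, value);
        }
        return value;
    }
}
```

The key property: a query for `user:999` that the filter rejects never increments `dbHits`, which is exactly the load that unmitigated penetration would send to the database.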
- cAdvisor — Container monitoring agent that collects resource usage metrics from containers by mounting host filesystems (/rootfs, /var/run, /sys, /var/lib/docker), deployed as DaemonSet with tolerations for master nodes.
- cAdvisor (Container Advisor) — Google's container monitoring tool that collects resource usage and performance data from running containers, deployed as DaemonSet with access to host filesystems for comprehensive container-level metrics
- Caffeine cache — A high-performance Java caching library used as a local in-memory cache layer, commonly integrated with Spring Boot applications
- Callback function equivalence (JavaScript vs Java) — The conceptual parallel between JavaScript callback functions and Java lambda expressions/Functional Interfaces as mechanisms for passing behavior and methods as parameters.
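The JavaScript/Java parallel above can be shown in a few lines: where JavaScript passes a function literal, Java passes a lambda typed as a functional interface. The `fetchUser` method here is a hypothetical example, not an API from the source.

```java
import java.util.function.Consumer;

// JS:   fetchUser(id, user => console.log(user))
// Java: pass behavior as a functional-interface parameter instead.
public class Callbacks {
    static void fetchUser(int id, Consumer<String> onDone) {
        String user = "user-" + id;  // stand-in for a real lookup
        onDone.accept(user);         // invoke the "callback"
    }
}
```

Usage: `Callbacks.fetchUser(7, u -> System.out.println(u));` — the lambda plays exactly the role a callback function plays in JavaScript.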
- Callout Types — Predefined semantic categories for callouts including note, abstract, info, todo, tip, success, question, warning, failure, danger, bug, example, and quote, each with distinct visual styling
- Canary Deployment — A gradual rollout strategy that directs a small percentage of traffic to the new version to monitor for issues before fully committing, named after the coal-mining practice of using canaries to detect toxic gases.
- Canary Deployment Lifecycle — The step-by-step process for implementing canary deployments: deploy v1 with Ingress, deploy v2 alongside v1, add Canary Ingress with traffic rules, validate v2, then migrate primary Ingress to v2 and decommission v1.
- canary deployment with autoscaling — Pattern demonstrating gradual traffic rollout between multiple service versions (v1, v2) in conjunction with Kubernetes Horizontal Pod Autoscaler.
- Canister Docker registry — A cloud-based Docker registry service offering free private repositories, accessible via Yahoo email authentication and standard Docker login/push commands.
- Caveman Compression — A semantic compression technique for LLM contexts that removes predictable grammar (articles, conjunctions, passive voice) while preserving unpredictable factual content (numbers, names, technical terms), achieving 15-58% token reduction without semantic loss.
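A toy illustration of the idea in the Caveman Compression entry: drop a few predictable function words while keeping numbers, names, and technical terms. This stopword filter is only a sketch of the intuition; the real technique is a prompt-level semantic rewrite, not a fixed word list.

```java
import java.util.Arrays;
import java.util.Set;
import java.util.stream.Collectors;

// Toy sketch: strip predictable grammar words, keep factual content.
public class CavemanToy {
    private static final Set<String> PREDICTABLE =
        Set.of("the", "a", "an", "and", "or", "is", "are", "was", "were", "of");

    public static String compress(String text) {
        return Arrays.stream(text.split("\\s+"))
                     .filter(w -> !PREDICTABLE.contains(w.toLowerCase()))
                     .collect(Collectors.joining(" "));
    }
}
```

Numbers and nouns survive untouched, which is the property the entry emphasizes: the unpredictable, information-bearing tokens are preserved.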
- CentOS 7 hostname management — Commands and procedures for checking and modifying the system hostname on CentOS 7 using hostnamectl utility and /etc/hosts configuration
- CentOS Repository File Management — Standard location and structure of CentOS yum repository files in /etc/yum.repos.d/ directory and procedures for modifying them.
- Centralized vs. Decentralized SOA Implementation — Two architectural approaches to implementing SOA principles: centralized (exemplified by ESB) focuses on connectivity and shared logic through a central hub, while decentralized (exemplified by microservices) prioritizes extensibility and distributed service governance without a central coordinator.
- Certificate Authority (CA) — An entity that issues digital certificates, essentially serving as a trusted third party that verifies the identity of certificate holders and binds public keys to identities.
- Certificate Authority (CA) Hierarchy — The trust infrastructure consisting of Root CA certificates embedded in operating systems as the foundation of trust, which sign server certificates to establish their authenticity, with intermediate CAs forming the chain between root and end-entity certificates.
- Certificate chain verification in service mesh — Hierarchical trust model establishing workload identity through certificate chains from workload certificates up through intermediate CA certificates to root CA certificates, enabling cryptographic validation of service-to-service communication.
- Certificate Metadata Structure — The essential components embedded in digital certificates including CA signatures, fingerprints, serial numbers, validity periods, and registered user information
- Certificate Signing Request (CSR) workflow — Two-step certificate issuance process: first generate a certificate signing request (CSR), then use CA private key to sign and issue the final certificate via openssl ca command.
- Channel initialization lifecycle — The sequence of operations that occur when a Netty channel is instantiated, including ID generation, unsafe object creation, and pipeline initialization
- Channel parent hierarchy — The parent-child relationship model in Netty channels where channels can have a parent channel (null for top-level channels), used for organizing channel structures
- Channel pipeline and ChannelHandlerContext — The bidirectional chain of ChannelHandlerContext objects containing handlers, with head and tail nodes, managing event propagation through EventExecutor threads in Netty's processing model.
- Channel Unsafe abstraction — An internal low-level I/O operations interface in Netty channels that handles the actual unsafe network operations, instantiated during channel creation
- ChannelFactory pattern — A factory pattern used in Netty for creating new Channel instances, demonstrated by channelFactory.newChannel() which produces specific channel types like NioServerSocketChannel
- ChannelFuture — Specialized Netty Future interface for channel I/O operations that extends io.netty.util.concurrent.Future with channel-specific functionality
- ChannelHandlerContext — Context object passed between handlers that enables event propagation through fireIN_EVT() for inbound events and OUT_EVT() for outbound events, mediating handler interaction.
- ChannelPipeline initialization — The automatic creation of a ChannelPipeline object during channel instantiation, which serves as the container for channel handlers and processing logic
- Chinese database education resources — Chinese-language video tutorials and courses for database technologies, particularly focusing on MySQL instruction on platforms like Bilibili
- Chocolatey — A package manager for Windows that simplifies the installation of software and development tools from the command line, analogous to apt-get or brew on other platforms.
- Chrome DevTools Panels API — Chrome extension API that allows developers to create custom panels within the browser's DevTools interface, extending debugging and development capabilities.
- Chrome Extension Development — Browser extensions built for Google Chrome that extend browser functionality, often including developer tools, productivity enhancements, or system integrations.
- Chrome Extension file structure — Standard organization comprising manifest.json (configuration), popup.html (UI), popup.js (popup logic), icon.png, and optionally jQuery or other libraries
- Chrome Extension manifest.json configuration — Configuration file defining plugin metadata, permissions, icons, browser actions, and content script injection rules
- Chrome Extension Plugin Structure — A Chrome extension plugin is composed of several files including manifest.json (main configuration), popup.html/popup.js (user interface), and content scripts that run in the context of web pages.
- Chrome extension research tools — Browser extensions designed to facilitate research workflows by providing enhanced text selection, highlighting, and content transfer capabilities beyond native browser functionality.
- Chrome permissions Manifest Configuration — The permissions array in manifest.json specifies which Chrome APIs and host permissions the extension requires, such as 'activeTab' for access to the currently active tab.
- Chrome research extensions for Obsidian — Browser Chrome plugins that integrate with Obsidian workflows including TabCopy for bulk copying tab URLs and titles, and Roam-Highlighter for capturing formatted content directly into notes.
- Chrome Web Store — Google's official marketplace for browser extensions, themes, and applications for the Chrome browser.
- chrome.tabs.executeScript API — Chrome API method that programmatically injects and executes JavaScript code into a specific tab, enabling automation like form filling and button clicking
- chrome.tabs.getSelected API — Deprecated Chrome API method for retrieving the currently active tab object, providing tab ID for script execution targeting
- ChromeDriver — A standalone server that implements W3C WebDriver standard for controlling Chrome browser programmatically, used primarily with Selenium for automated testing.
- ChromeOptions configuration — ChromeDriver configuration class for setting browser arguments like language (--lang), headless mode, and GPU acceleration control
- CI/CD Pipeline Architecture — Continuous integration and delivery workflows combining Git, Jenkins, container registries (Harbor), and deployment automation (Spinnaker/ArgoCD) to automate application delivery from code commit to production deployment.
- CI/CD pipeline integration — Continuous integration and delivery workflow combining Git, Jenkins (build), container registry (Harbor), and Spinnaker (deployment) to automate application delivery from code commit to Kubernetes cluster deployment with rollback capabilities.
- Cipher suite — A combination of cryptographic algorithms that specify the encryption, authentication, and key exchange methods used in a secure SSL/TLS connection, configurable via command-line tools.
- Claude Code Agent Patterns — Five progressive agent workflow patterns in Claude Code ranging from sequential execution to fully autonomous operation, each suited for different task complexity levels.
- Claude Code Built-in Subagents — Three pre-configured subagents (Explore with Haiku for read-only file exploration, Plan with Haiku for codebase research, and General Purpose with Sonnet for complex multi-step tasks) that Claude Code automatically dispatches based on task requirements.
- Claude Code plugin system — Claude Code supports installing behavior plugins through marketplace commands that modify agent behavior by injecting instruction layers, demonstrated with the andrej-karpathy-skills plugin.
- CLI input validation patterns — Techniques for validating command-line input including checking os.Args length, validating required flags, and using PrintDefaults() to display usage information when validation fails.
- CLI subcommand pattern — A design pattern for structuring command-line tools using subcommands (e.g., 'videos get', 'videos add') with switch statement routing to separate handler functions.
- Closed Learning Loop — A self-improving AI agent mechanism where skills are automatically documented and reused after each task completion, building persistent memory across sessions without manual configuration.
- Cloud development environment — Remote development environment paradigm where developers write and test code directly in a Kubernetes cluster rather than local machines, providing cloud-native context during development.
- Cloud Native Buildpacks — A framework and ecosystem for transforming source code into OCI-compliant container images automatically by detecting build dependencies and configurations without manual Dockerfile authoring
- Cloud Native Computing Foundation (CNCF) — Organization founded by Google and the Linux Foundation in 2015 to manage Kubernetes and other cloud-native technologies, providing governance and support for the cloud-native ecosystem.
- Cloudflare API integration — Programmatic access to Cloudflare services through their REST API, enabling automation of DNS record management, zone configuration, and other Cloudflare services without manual web interface interaction.
- Cloudflare DNS — Cloudflare's DNS management dashboard for configuring domain name resolution, DNS records, and obtaining API tokens for programmatic access
- Cloudflare KV (Key-Value Store) — Distributed key-value database optimized for low-latency reads, with free tier of 1,000 writes and 100,000 reads per day
- Cloudflare Learning Center — Cloudflare's educational platform providing cryptographic and security learning resources in Chinese, serving as a primary reference for TLS, SSL, and modern web security protocols.
- Cloudflare Workers — Serverless edge computing platform that executes JavaScript/TypeScript at 300+ global locations, with free tier of 100,000 requests per day
- Cloudflare Workers URL Shortener Pattern — Serverless URL shortening architecture using Workers for POST (create short code) and GET (302 redirect) operations with KV storage for key-value mappings
- Cluster Autoscaler — Cluster-level autoscaler that adjusts node-pool quantities by provisioning new nodes during high load (scale-up) and removing underutilized nodes during low load (scale-down), with scale-down triggered when CPU and memory requests fall below 50% threshold.
- Cluster Autoscaler (CA) — Cluster-level autoscaler that adjusts node-pool quantities by provisioning new nodes during high load and removing underutilized nodes during low load, with scale-down triggered when CPU and memory requests fall below 50% threshold.
- Cluster Autoscaler scale-down protection — Mechanisms to prevent unwanted node removal including PodDisruptionBudget validation, Pod affinity/anti-affinity checks, and cluster-autoscaler.kubernetes.io/scale-down-disabled annotation to manually protect nodes from scale-down.
- cluster-admin role — A built-in Kubernetes ClusterRole that provides full administrative permissions across all namespaces and resources in the cluster, available by default without requiring custom creation
- Cluster-Internal Service Access — The networking pattern where services running within a Kubernetes cluster can communicate with each other using internal DNS names, but are not directly accessible from outside the cluster without an ingress mechanism.
- cluster-level-logging — A Kubernetes logging architecture design where the logging system operates independently of container, Pod, and Node lifecycles to ensure log persistence across application failures and restarts.
- cluster-level-logging architecture — Kubernetes logging system design where log collection operates independently of container, Pod, and Node lifecycles, ensuring log persistence and availability regardless of component failures or restarts.
- Clustered vs non-clustered index — The distinction between index storage architectures in MySQL: clustered indexes (InnoDB's default) where the data is stored within the B+ tree leaf pages ordered by primary key, versus non-clustered indexes (MyISAM) where index and data are stored separately.
- ClusterIP service type — The default Kubernetes Service type that exposes the service on an internal cluster IP, making it accessible only from within the cluster using serviceName:port or serviceName.namespace.svc:port notation.
- ClusterRole binding — An RBAC mechanism that binds a ServiceAccount to a ClusterRole, granting the service account the permissions defined in that role across the entire cluster
- Coding Agent failure patterns — The three typical failure modes of coding agents: over-explaining instead of acting, losing the task thread and changing goals mid-task, and bad tool use (describing a tool call without actually executing it)
- cognitive operating system extraction — A five-layer model for distilling expertise: expression DNA (tone/rhythm/vocabulary), mental models, decision heuristics, anti-patterns/values, and honesty boundaries—going beyond quotes to derive predictive, actionable cognitive rules.
- Collapsible content blocks — Admonition configuration option that allows content to be expanded or collapsed, with control over default state (open/closed) for managing content visibility.
- Collector interface components — Four functional methods defining collector behavior: supplier creates containers, accumulator incorporates elements, combiner merges containers, and finisher performs final transformation.
- Collectors utility class — Java's final class providing common static collector implementations for stream reduction operations, including grouping, partitioning, and counting collectors.
- Collectors.groupingBy() patterns — Powerful multi-level grouping operations that categorize stream elements by one or more classifiers, optionally with downstream collectors like summingInt or mapping.
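The multi-level grouping described above can be demonstrated with standard `java.util.stream.Collectors`. The example groups strings by length, once into lists and once with a `counting()` downstream collector.

```java
import java.util.List;
import java.util.Map;
import static java.util.stream.Collectors.counting;
import static java.util.stream.Collectors.groupingBy;

// groupingBy alone vs. groupingBy with a downstream collector.
public class GroupingDemo {
    // Classifier only: Map from length to the elements of that length.
    public static Map<Integer, List<String>> byLength(List<String> words) {
        return words.stream().collect(groupingBy(String::length));
    }

    // Classifier + downstream collector: Map from length to element count.
    public static Map<Integer, Long> countByLength(List<String> words) {
        return words.stream().collect(groupingBy(String::length, counting()));
    }
}
```

Swapping `counting()` for `summingInt(...)` or `mapping(...)` gives the other downstream variants the entry mentions, without changing the grouping structure.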
- Collision Detection Pattern in URL Shorteners — Conflict prevention mechanism that checks if short code already exists in KV before storage, returning 409 Conflict status to prevent overwriting existing URLs
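The check-before-store rule above can be sketched with an in-memory map standing in for KV; this is not Workers code, and the status codes simply mirror the entry. Using an atomic `putIfAbsent` makes the existence check and the write a single operation, avoiding a race between two concurrent creates.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of collision detection for short codes (in-memory stand-in for KV).
public class ShortCodeStore {
    private final Map<String, String> kv = new ConcurrentHashMap<>();

    // Returns an HTTP-style status: 201 Created, or 409 Conflict if taken.
    public int create(String code, String url) {
        return kv.putIfAbsent(code, url) == null ? 201 : 409;
    }

    // The stored URL would become the target of a 302 redirect.
    public String resolve(String code) { return kv.get(code); }
}
```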
- Command-line input validation — Validating CLI input parameters by checking for empty strings and printing usage information with PrintDefaults() before exiting with error codes.
- Compare-and-set (CAS) — A hardware-level atomic operation that atomically updates a variable only if its current value matches an expected value, serving as the foundation for lock-free concurrent algorithms
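The CAS semantics above are exposed directly by `java.util.concurrent.atomic`. The first method shows the bare primitive; the second shows the classic lock-free retry loop built on top of it.

```java
import java.util.concurrent.atomic.AtomicInteger;

// compareAndSet succeeds only when the current value matches the expectation.
public class CasDemo {
    public static boolean tryIncrementFrom(AtomicInteger counter, int expected) {
        return counter.compareAndSet(expected, expected + 1);
    }

    // Lock-free increment: re-read and retry until the CAS wins.
    public static int lockFreeIncrement(AtomicInteger counter) {
        int prev;
        do {
            prev = counter.get();
        } while (!counter.compareAndSet(prev, prev + 1));  // retry on contention
        return prev + 1;
    }
}
```

A CAS with a stale expectation fails without modifying the variable, which is what lets concurrent algorithms detect interference and retry instead of blocking.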
- Compile-time processing — Execution phase during Java compilation where annotation processors run to analyze and potentially modify code before the final bytecode is generated.
- Compiled Truth + Timeline Model — A knowledge representation pattern where each page contains two sections: 'compiled truth' (current best understanding, freely rewritten as new evidence emerges) and 'timeline' (chronological event log, append-only, never edited).
- Component registration via modules — Registering Vue components by importing them as ES6 modules and including them in the component object, enabling browser-native module composition
- Composite index leftmost prefix principle — A rule for multi-column indexes requiring queries to reference columns in the left-to-right order they were defined; queries using only prefix columns can utilize the index, but skipping or reordering columns causes index failure.
- Comprehensive troubleshooting documentation — Technical documentation practice that includes file analysis, machine-specific instructions, visual aids, and preemptive solutions for common error points.
- Compressible vs incompressible resources in Kubernetes — Classification where CPU is compressible (pods throttle when constrained) and memory is incompressible (pods terminated via OOM killer when exceeding limits), affecting how resource pressure impacts different workloads.
- Computational flow model — TensorFlow's execution model where computations are represented as data flowing through graphs, enabling efficient distributed processing and optimization.
- Config auto-reload sidecar pattern — A Kubernetes sidecar container pattern that watches for configuration changes and triggers hot-reload of applications without requiring pod restarts, demonstrated with Jenkins Configuration as Code.
- ConfigMap — Kubernetes API object for storing non-sensitive configuration data as key-value pairs or files, mounted as volumes to provide environment variables and configuration files to containers.
- ConfigMap creation methods — Four approaches to creating ConfigMaps in Kubernetes using kubectl: importing files with --from-file, literal key-value pairs with --from-literal, YAML manifests with file contents, and YAML manifests with key-value data.
- ConfigMap injection methods — Techniques for making ConfigMap data available to containers including environment variable injection via configMapKeyRef and volume mounting as files.
- ConfigMap Volume — Used to store configuration data as files, typically for environment variables or database initialization settings. Provides configuration decoupling from container images.
- Configuration consistency for troubleshooting — The requirement that learners match machine naming and configuration exactly to the instructor's setup to enable effective debugging support.
- Configuration decoupling pattern — Separating environment-specific configuration (database connections, tokens, API keys) from application code to enable seamless deployment across different environments
- Configuration hot-reload without rebuild — Apollo capability to update application configuration at runtime through the Portal UI without requiring container image rebuild or application restart, with clients receiving changes via push/pull mechanism.
- Connection borrowing pattern — An application pattern where database connections are temporarily borrowed from a connection pool rather than owned by the application, requiring explicit return of the connection after use to enable reuse by other clients.
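The borrow/return discipline above can be sketched with a minimal pool; this toy stands in for a real connection pool (such as HikariCP) and holds plain objects rather than JDBC connections.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal borrow/return pool sketch: connections are shared, not owned.
public class ToyPool<T> {
    private final BlockingQueue<T> idle = new ArrayBlockingQueue<>(16);

    @SafeVarargs
    public ToyPool(T... connections) {
        for (T c : connections) idle.add(c);
    }

    // Borrow a connection; null means the pool is exhausted.
    public T borrow() { return idle.poll(); }

    // The borrower must hand the connection back so other clients can reuse it.
    public void giveBack(T conn) { idle.add(conn); }

    public int available() { return idle.size(); }
}
```

Forgetting `giveBack` leaks a connection permanently, which is why real applications pair borrow and return in try/finally (or try-with-resources) blocks.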
- Console Grep plugin — An Eclipse marketplace plugin for searching and filtering console output, used as an example in this extraction workflow to demonstrate the manual plugin relocation process.
- Consumer&lt;T&gt; interface — A Java functional interface representing an operation that accepts a single input argument and returns no result (void), used primarily for side-effect operations on objects.
- container — Lightweight application packaging units that share the host OS kernel while maintaining isolated filesystem, CPU, memory, and process spaces, providing faster startup and more efficient resource utilization compared to VMs.
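The Consumer interface entry above can be illustrated with `forEach`, which takes a `Consumer` directly, and with `andThen`, which chains two side-effecting operations.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Consumer<T> performs a side effect and returns nothing.
public class ConsumerDemo {
    public static List<String> applyAll(List<String> items, Consumer<String> action) {
        items.forEach(action);  // forEach accepts a Consumer<? super T>
        return items;
    }
}
```

Usage: `new ArrayList<String>()` as a sink and a method reference like `sink::add` is a common Consumer idiom.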
- Container debugging techniques — Methods and tooling for debugging applications running inside Docker containers across different programming languages and frameworks, including IDE integration with VSCode.
- Container deployment benefits — Containers provide lightweight isolation by sharing the host OS while maintaining separate filesystems, CPU, memory, and process spaces, enabling rapid startup, consistency across environments, and efficient resource utilization.
- Container image build process — Using Dockerfiles to convert application source code into Docker container images for deployment
- Container image tools for Kubernetes — Specialized OCI-compliant tools for building and managing container images in Kubernetes environments including Buildah, Kaniko (builds images inside pods), Skopeo (copies images between registries), and Dive (analyzes image layers).
- Container lifecycle hooks — Kubernetes mechanisms for executing custom code at specific container lifecycle points: PostStart runs immediately after container creation, PreStop runs before termination for graceful shutdown.
- container orchestration — Automated management of containerized applications including scaling, deployment, rolling updates, rollback, and monitoring to ensure service continuity in production environments.
- Container orchestration deployment order — Sequential service startup pattern for distributed systems where dependent services must be deployed in specific order (e.g., Minio→Redis→Clouddriver→Front50→Orca→Echo→Igor→Gate→Deck→Nginx) to satisfy dependencies.
- Container Orchestration Necessity — Rationale for adopting Kubernetes as a container orchestration layer to handle load balancing, auto-scaling, database replication, and distributed system management beyond simple Docker containerization.
- Container prerequisites for Kubernetes — Essential requirements workloads must meet to run on Kubernetes: containerized applications, exposed ports, configuration via environment variables or mounted files, data persistence through volumes, and proper entrypoint definitions.
- Container registry integration with Harbor — Private Docker registry setup using Harbor for storing and versioning application images, configured with Kubernetes secrets for authenticated pulls.
- Container registry workflow — The process of building container images locally, pushing to remote registries (e.g., Docker Hub), and enabling Kubernetes clusters to pull and deploy images by reference, forming the foundation of containerized application distribution.
- Container Runtime Interface (CRI) — gRPC-based abstraction layer (RuntimeService and ImageService) separating kubelet from container runtimes, enabling pluggable container implementations without direct Docker API dependencies.
- Container volume mounting for development — Using Docker volume mounts (-v flag) to synchronize local files with container work directory, enabling code changes without rebuilding the container image
- Container vs virtual machine architecture — Containers are isolated processes sharing the host kernel with Namespace/Cgroups constraints, while VMs run complete guest operating systems under a Hypervisor with hardware virtualization, making containers more resource-efficient.
- Container-based development environment — Using containers (Docker) as isolated development environments to keep the host computer clean and avoid installing development tools, SDKs, and runtimes directly on the local machine.
- Containerization tools comparison — Overview of container runtime alternatives including Docker, Podman (daemonless container engine), and VSCode dev containers for development environment isolation.
- Containerized Apollo deployment workflow — Standardized delivery process: download release packages, customize startup scripts and Dockerfile, build and push images to registry, create Kubernetes manifests (ConfigMap, Deployment, Service, Ingress), and apply resources.
- Containerized AWS CLI for LocalStack — Running AWS CLI v2 in a Docker container with network connectivity to LocalStack and endpoint-url configuration for local AWS service simulation
- Containerized database deployment — The practice of deploying and managing database systems within Docker containers to ensure consistency across environments and simplify dependency management.
- Containerized Git service deployment — Using Docker containers to deploy and run Git hosting services, providing isolation, portability, and simplified infrastructure management.
- Content capture workflow optimization — Methods for streamlining the process of capturing web content by combining template-based extraction with keyboard-driven operations.
- Content Security Policy (CSP) — An HTTP header that uses whitelisting to control which resources (scripts, images, fonts, styles, frames) can be loaded on a website, preventing XSS attacks by restricting content sources.
- content_scripts Chrome Extension Scripts — Content scripts specified in manifest.json are JavaScript files injected into matching web pages, running in the context of the DOM rather than in the extension's popup, and can include libraries like jQuery.
- content_scripts in Chrome Extensions — JavaScript files injected into matching web pages (defined by matches patterns) to interact with page DOM, executing in page context not popup context
- Context display naming — Using setDisplayName() to assign descriptive names to ApplicationContext instances for debugging and identification purposes in hierarchical structures.
- Context isolation and independence — Each ApplicationContext maintains independent bean definitions and lifecycle, with beans registered in one context not directly visible to sibling contexts except through parent relationships.
- Context managers in Python (with statement) — The 'with' statement pattern for automatic resource management, ensuring files are properly closed even when exceptions occur
- Context Window Management in Claude Code — Techniques for managing Claude Code's context limitations through skills loading/unloading, /compact for conversation summarization, and /clear for reset, with progression to multi-terminal workflows when the context ceiling is reached.
- context-background-for-timeout-control — Go's context.Background() creates an empty context used as the foundation for Redis operations, enabling timeout and deadline management for database calls through the context parameter.
- Contribution testing workflow — Quality assurance process requiring contributors to test changes with their own Docker images before requesting official builds, ensuring Istio sample applications work correctly.
- Controller mediation pattern — A design pattern in Angular where controllers serve as the bridge or translation layer between isolated Angular code and external JavaScript contexts, handling data flow and event communication.
- Cookie security attributes (HttpOnly and Secure) — Security flags for HTTP cookies: HttpOnly prevents JavaScript access to cookies, while Secure ensures cookies are only transmitted over HTTPS connections, both mitigating XSS attack risks.
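Both flags from the entry above can be set with the JDK's `java.net.HttpCookie`; the cookie name and value here are illustrative.

```java
import java.net.HttpCookie;

// Equivalent wire form: Set-Cookie: SESSIONID=...; Secure; HttpOnly
public class SecureCookie {
    public static HttpCookie sessionCookie(String value) {
        HttpCookie c = new HttpCookie("SESSIONID", value);
        c.setHttpOnly(true);  // page scripts cannot read it via document.cookie
        c.setSecure(true);    // only transmitted over HTTPS
        return c;
    }
}
```

Together the two flags close off the two main leak paths: `HttpOnly` against script access after an XSS, `Secure` against plaintext transmission.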
- covering-index — An index optimization technique where queries can be satisfied entirely from the index without accessing the data rows (avoiding '回表' - table lookup), typically involving SELECT columns that are all present in the index.
- CPU pinning with cpuset — Performance optimization technique that binds Guaranteed QoS pods to exclusive CPU cores (integer CPU limits = requests), eliminating context-switching overhead. Recommended for DaemonSet pods to prevent meaningless eviction-reconstruction cycles.
- CPU pinning with cpuset in Kubernetes — Performance optimization technique binding Guaranteed QoS pods to exclusive CPU cores by setting integer CPU limits, eliminating context switching overhead for latency-sensitive workloads like DaemonSets.
- CPU request configuration for autoscaling — Kubernetes Horizontal Pod Autoscaler requires all containers in pods to specify CPU requests; both the application containers and injected istio-proxy sidecar containers must include cpu requests for autoscaling to function.
- Creator curation strategy — The practice of systematically following and organizing content from technical educators, YouTubers, and bloggers as part of a personalized learning ecosystem and resource collection.
- CRI (Container Runtime Interface) — gRPC-based interface layer between kubelet and container runtimes, consisting of RuntimeService (container operations: create, start, exec, delete) and ImageService (image operations: pull, remove). CRI design intentionally avoids Pod concepts to maintain stability despite frequent Pod API changes, focusing only on container-level primitives.
- Cron job scheduling — Unix/Linux time-based job scheduler that executes commands or scripts at specified intervals, commonly used for periodic maintenance tasks, automated updates, or monitoring routines.
- Cross-app clipboard integration — The ability to seamlessly transfer captured content from web browsers directly into note-taking applications through standardized clipboard operations.
- Cross-Language RPC Frameworks — Remote Procedure Call technologies enabling interoperability between different programming languages, including XML WebService, JSON RESTful, Thrift, Protobuf, Avro, and gRPC.
- Cross-Platform Creator Following — Practice of tracking technical content creators across multiple platforms and channels (YouTube, GitLab, GitHub, personal blogs) to access different types of content they produce.
- Cross-Platform Networking Tools — Networking utilities like netcat adapted for Windows environments, enabling cross-platform network operations
- Cross-prompt KV caching — CacheWrapper maintains KV cache state across multiple generations, computing common prefixes between prompts to avoid reprocessing tokens and enabling incremental updates with cancellation support.
- cross-provider handoff — A single conversation can switch seamlessly between different LLM providers (e.g., from Claude to GPT to Gemini); thinking blocks are automatically converted into tags, preserving context continuity in multi-model collaboration scenarios.
- Cryptographic hash functions — One-way functions that generate fixed-size digests from input data, including legacy MD family, SHA-1/2/3, BLAKE2, Whirlpool, and various regional standards (GOST, SM3).
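The fixed-size-digest property of these functions is easy to demonstrate with Python's standard hashlib module:

```python
import hashlib

data = b"hello world"
# Digest size is fixed regardless of input length.
print(hashlib.sha256(data).hexdigest())    # 64 hex chars (256 bits)
print(hashlib.sha3_256(data).hexdigest())  # 64 hex chars
print(hashlib.blake2b(data).hexdigest())   # 128 hex chars (512-bit default)

# A 10,000-byte input still yields a 32-byte SHA-256 digest.
assert len(hashlib.sha256(b"x" * 10_000).digest()) == 32
```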
- Cryptography and Security Fundamentals — Security concepts covering public key infrastructure, certificate authorities, SSL/TLS protocols, cryptographic algorithms (RSA, AES, SHA), Java Cryptography Architecture (JCA/JCE), OpenSSL tools, and secure communication practices.
- Cryptography resource hub — A comprehensive index of cryptographic learning materials and practical resources including SSL/TLS implementation, certificate management, OpenSSL tools, Java security architecture (JCA/JCE), and public key infrastructure (PKI).
- CSR (Certificate Signing Request) — A certificate signing request file containing domain information (CN), organization details, and location data that must be submitted to a Certificate Authority to obtain an SSL certificate, generated using the private key but only containing the public key.
- CSV data persistence pattern — A common pattern for persisting application data to CSV files, including reading files into dictionaries and writing collections back to disk
- CSV to JSON migration pattern — Transitioning data storage formats from CSV to JSON, requiring changes to both serialization logic (writer vs json.dumps) and file reading logic (DictReader vs json.loads)
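The serialization swap described by the two entries above can be sketched in Python (sample data is illustrative):

```python
import csv, json, io

# Source format: CSV read into a list of dicts via DictReader.
csv_text = "id,name\n1,alpha\n2,beta\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))

# Target format: the same records serialized with json.dumps.
json_text = json.dumps(rows)

# Reading back uses json.loads instead of DictReader.
restored = json.loads(json_text)
assert restored == rows
```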
- Cursor-based pagination — Pagination technique using the last row's primary key as a parameter for the next query, avoiding OFFSET scans by filtering WHERE id > last_id.
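A minimal cursor-pagination sketch, using an in-memory SQLite table (schema and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO items (val) VALUES (?)",
                 [(f"row{i}",) for i in range(1, 101)])

def page(last_id: int, size: int):
    # Filter on the primary key instead of OFFSET: the index seek
    # starts at last_id, so no preceding rows are scanned and discarded.
    return conn.execute(
        "SELECT id, val FROM items WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, size)).fetchall()

first = page(0, 10)
second = page(first[-1][0], 10)   # cursor = last row's id
```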
- Custom Claude Code Agents — User-defined agent specifications stored in the .claude/agents/ directory with descriptive names and instructions that Claude can automatically match or manually invoke for specialized tasks.
- Custom Domain vs Routes in Cloudflare Workers — Two methods for binding domains: Custom Domain (entire domain handled by Worker) vs Routes (specific path interception), with Custom Domain recommended for URL shorteners
- Custom istio-agent template — Modified Envoy sidecar injection template that configures the istio-agent to retrieve certificates from SPIRE instead of Istio's default CA, enabling external certificate authority integration.
- Custom Jenkins Docker image build — Building a customized Jenkins Docker image using Dockerfile with LTS Jenkins and JDK11, tagged for container registry deployment
- Custom metrics with Prometheus Adapter — An alternative scaling approach using Prometheus Adapter for autoscaling based on custom application metrics beyond standard CPU/memory resource metrics provided by Metrics Server.
- Custom monitoring script pattern for process detection — Reusable PowerShell pattern using Get-Process, ForEach iteration, and Select-String pattern matching to monitor specific application processes
- Custom Spring Data repository — Implementation of custom repository classes extending Spring Data JPA functionality with domain-specific query methods
- Daemonless container runtime — Container management architecture that operates without a background daemon process, directly managing containers and pods through CLI or API calls.
- Dashboard RBAC Authorization Levels — Implementation of role-based access control for Kubernetes Dashboard with two tiers: admin users (cluster-admin) with full permissions and view-only users (dashboard-viewonly) with limited read-only access to resources.
- Data Serialization Formats — Comparison of binary and text-based serialization protocols including Thrift, Protobuf, Avro, JSON, and XML used for efficient data transmission in distributed systems.
- Data URI base64 encoding for images — Image encoding technique converting binary image data to base64 strings with MIME type prefixes (data:image/png;base64,) for inline HTML embedding
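The encoding step can be shown in a few lines of Python (the PNG signature bytes stand in for real image data):

```python
import base64

# PNG file signature used as stand-in binary image data.
png_bytes = b"\x89PNG\r\n\x1a\n"
b64 = base64.b64encode(png_bytes).decode("ascii")
data_uri = f"data:image/png;base64,{b64}"

# The data URI can be embedded directly in HTML.
html = f'<img src="{data_uri}">'
```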
- Data-driven development mode — Angular's development philosophy and architectural pattern where applications are structured around data flow and state management as the primary organizing principle, with UI changes driven by underlying data rather than direct DOM manipulation.
- data-persistence-externalization-principle — Keeping application state out of the application process itself by using external storage (Redis) instead of local files, ensuring data survives application crashes and enabling multi-instance deployments.
- Database initialization for configuration centers — Process of setting up MariaDB with UTF8MB4 character set, creating dedicated databases (ApolloConfigDB, ApolloPortalDB), granting restricted user permissions, and populating initial schema and configuration data from SQL scripts.
- Database pagination offset problem — Query cost grows linearly with the OFFSET value in traditional LIMIT/OFFSET pagination, because the database must scan and discard all preceding rows before returning the requested page.
- Database transaction fundamentals — The core mechanisms underlying database transactions, centered around locking and concurrency control to ensure data integrity and consistency.
- DataJoint MySQL integration — A specialized Docker configuration for MySQL optimized for DataJoint workflow management in scientific computing and data pipelines.
- declarative configuration — Kubernetes approach where users declare the desired state of resources in configuration files, and the platform works to achieve and maintain that state automatically.
- Declarative deployment pattern — An infrastructure-as-code approach where users declare the desired state of resources (Pods, ReplicaSets) in YAML manifests, and Kubernetes controllers work to achieve and maintain that state automatically.
- declarative vs imperative Kubernetes management — Two approaches to managing Kubernetes resources: imperative commands (kubectl create/apply directly) versus declarative YAML manifests applied with kubectl apply -f
- declarative-resource-management — Infrastructure configuration approach using Terraform's declarative language to define desired state of resources, allowing the tool to automatically determine and execute necessary changes
- Deduplication via Parameter Hashing — A technique for preventing duplicate report generation by hashing query parameters (MD5) and checking for recent identical requests within a configurable time window before processing.
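A minimal in-process sketch of this technique, assuming a hypothetical `should_generate` check and a 300-second window (a production version would keep the hash table in shared storage such as Redis):

```python
import hashlib, json, time

_recent: dict[str, float] = {}   # param hash -> last submission time
WINDOW_SECONDS = 300             # hypothetical dedup window

def should_generate(params: dict) -> bool:
    # Canonicalize params so key order does not change the hash.
    key = hashlib.md5(
        json.dumps(params, sort_keys=True).encode()).hexdigest()
    now = time.time()
    last = _recent.get(key)
    if last is not None and now - last < WINDOW_SECONDS:
        return False              # identical recent request: skip
    _recent[key] = now
    return True
```

Because the parameters are canonicalized before hashing, `{"a": 1, "b": 2}` and `{"b": 2, "a": 1}` count as the same request.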
- Deep learning feature extraction — Core capability of deep learning systems to automatically learn and extract relevant features from raw data, distinguishing it from traditional machine learning approaches that often require manual feature engineering.
- Default behavior override — Customizing application default settings to align with personal workflow preferences, such as changing which document opens first when launching a tool.
- Default Kubernetes namespaces — Four pre-created namespaces in Kubernetes clusters: default (for user objects), kube-system (for system components), kube-public (for cluster-readable resources), and kube-node-lease (for node heartbeats)
- DefaultServeMux — Go's built-in HTTP request multiplexer that routes incoming requests to registered handlers based on URL patterns, used when passing nil as the handler to ListenAndServe.
- Deferred join optimization — A query pattern where the pagination operation is first performed on an indexed column (typically the primary key) using a subquery, then joined back to the full table to retrieve remaining columns, avoiding random row lookups during the scan.
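The pattern can be demonstrated against an in-memory SQLite table (table and column names are illustrative; the technique matters most on wide rows over a real storage engine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany("INSERT INTO posts (body) VALUES (?)",
                 [(f"body{i}",) for i in range(1, 1001)])

# Deferred join: paginate on the indexed primary key in a subquery,
# then join back to fetch the wide columns for just those ids.
rows = conn.execute("""
    SELECT p.id, p.body
    FROM posts p
    JOIN (SELECT id FROM posts ORDER BY id LIMIT 10 OFFSET 500) page
      ON p.id = page.id
    ORDER BY p.id
""").fetchall()
```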
- Delay Queue Pattern — A message queuing pattern where messages are held with a TTL before being routed to a dead letter queue for handling timeout scenarios, commonly used for detecting and managing failed or long-running background tasks.
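An in-process sketch of the pattern, using a heap of `(ready_at, message)` pairs to stand in for a broker's TTL queue (real systems typically use RabbitMQ dead-letter exchanges or similar):

```python
import heapq, time

delay_queue: list[tuple[float, str]] = []
dead_letters: list[str] = []

def publish(msg: str, ttl_seconds: float) -> None:
    # Message becomes "dead" once its TTL elapses.
    heapq.heappush(delay_queue, (time.time() + ttl_seconds, msg))

def drain_expired() -> None:
    now = time.time()
    while delay_queue and delay_queue[0][0] <= now:
        _, msg = heapq.heappop(delay_queue)
        dead_letters.append(msg)   # timeout scenario handled here

publish("task-42", ttl_seconds=0.0)
drain_expired()
```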
- Delegation Configuration — Configuration parameters controlling multi-agent behavior including max_concurrent_children (parallelism limit), max_spawn_depth (nesting levels 1-3), orchestrator_enabled (global switch), subagent_auto_approve (dangerous command approval), and optional per-tier model/provider overrides.
- Demo-driven learning — An approach to technical learning and documentation that uses working example projects as the primary educational vehicle rather than theoretical explanations alone
- Deployment resource scaling considerations — Blue/green deployments require double resource capacity temporarily during transitions, making resource availability a key consideration for adoption
- Deployment Revision history tracking — The automatic recording mechanism in Kubernetes that creates revision records for Deployment changes, specifically triggered by spec.template modifications rather than all configuration changes.
- Deployment self-healing behavior — The automatic recreation of pods by Kubernetes deployments when individual pods are deleted, demonstrating the deployment controller's role in maintaining desired state.
- Deployment Strategies in Kubernetes — Various application rollout approaches including Recreate, Rolling Update (Ramped), Blue-Green, Canary, A/B Testing, and Shadow deployments, each with different trade-offs in downtime, resource requirements, and rollback capabilities.
- Deployment Strategy Selection Framework — A decision framework for choosing deployment strategies based on trade-offs between downtime tolerance, resource costs, rollback requirements, traffic control needs, and business risk tolerance.
- Design Patterns (设计模式) — Classic software design patterns for solving recurring architecture problems, organized into categories including behavioral patterns, interpreter patterns, and visitor patterns as demonstrated in educational video courses and technical documentation.
- Design Patterns Catalog — Comprehensive reference to the 23 classic GoF design patterns organized into creational, structural, and behavioral categories, serving as foundational patterns for reusable object-oriented software architecture.
- Dev Container Images — Pre-built Docker container images maintained by devcontainers organization (GitHub) that serve as starting points for development environments, including Alpine-based images and other base configurations.
- Dev Container Rebuild — The process of reconstructing the container image and configuration when changes are made, typically triggered via VSCode command palette (F1 > Remote-Containers: Rebuild and Reopen in Container), with troubleshooting steps for failed builds.
- Developer bookmark organization — Systematic method for organizing browser bookmarks by technology stack (Spring) and topic to maintain quick access to learning resources and documentation
- Development Proxy Server — A local server that acts as an intermediary between the development browser and the backend application, enabling hot reload and other development features
- DevOps Bootcamp curriculum — An educational program or training framework for DevOps engineering, covering essential tools, practices, and methodologies required for modern infrastructure and operations roles.
- DevOps documentation backlog management — Organizing and tracking pending documentation and learning tasks related to DevOps technologies and practices through a structured TODO system with status indicators.
- DevOps documentation taxonomy — A hierarchical categorization system for technical documentation that organizes development operations content by categories and tags, enabling systematic navigation and retrieval of related technical resources.
- DevOps learning roadmap — A structured career progression path for DevOps engineers that outlines skill advancement from foundational concepts through intermediate to advanced topics, helping prevent overwhelm when learning new technologies
- DevOps learning roadmap methodology — Educational framework for DevOps engineers to learn new technologies without overwhelm, emphasizing structured skill progression and focused learning paths.
- DevOps MOC (Map of Contents) — A structured navigation hub organizing DevOps documentation into categorical sections including GitHub, networking fundamentals (free domains, DNS), containerization tools (vscode-devcontainer, podman), and cloud development environments (gitpod).
- DevOps MOC navigation map — A structured index document (000-MOC-devops) that serves as the central navigation hub for DevOps-related documentation, organizing links to GitHub resources, free domain setup, proxy tools, containerization tools, and CI/CD development environments.
- DevOps monitoring and observability — The practice of collecting, analyzing, and acting on telemetry data from distributed systems to understand service behavior and performance in production environments.
- DevOps Navigation Map — A structured indexing framework for organizing DevOps knowledge areas, using maps and categorical hierarchies to navigate technical documentation.
- DevOps networking and domain management — Essential networking capabilities for DevOps including free domain acquisition, DNS configuration, and secure tunneling solutions (ngrok) for exposing local development environments to the internet for testing and integration.
- DevOps TODO tracking workflow — A documentation workflow combining markdown-based TODO lists with GitHub Projects for tracking development tasks and learning resources, particularly for DevOps and Spring framework studies.
- DevOps 學習地圖 (DevOps Learning Map) —
- DEVPOS — A technology domain or methodology centered around navigation maps and organizational structure, associated with Docker containers and automation systems.
- DEVPOS Navigation Map — A structured navigation system for organizing DEVPOS-related documentation and resources, serving as an index or hub for accessing different components like Docker and automation modules.
- DevTools Extensions — Browser extensions that extend Chrome's native Developer Tools functionality with custom panels, context menu items, or sidebar panes to provide specialized debugging capabilities.
- Digital Certificate — A cryptographic credential that binds a public key to identity information through metadata including CA signatures, fingerprints, serial numbers, expiration dates, and registered users
- Digital certificate and TLS protocol guide — Technical documentation covering RSA certificate chain generation, TLS protocol implementation, and practical cryptography for software developers, with detailed explanations of certificate structures and HTTPS security mechanisms.
- Digital Signature — A cryptographic technique that proves an input message originated from a private key holder by using the corresponding public key for verification
- discardReadBytes() memory compaction — Operation that reclaims memory space by discarding already-read bytes, shifting readable content to the beginning of the buffer and increasing writable capacity at the end.
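The original operation belongs to Netty's ByteBuf (Java); the mechanics can be mimicked with a generic read/write-index buffer in Python (class and method names are illustrative):

```python
class ReadWriteBuffer:
    """Generic sketch of a Netty-style buffer with read/write indices."""
    def __init__(self, capacity: int):
        self.data = bytearray(capacity)
        self.reader = 0   # next byte to read
        self.writer = 0   # next free slot to write

    def write(self, payload: bytes) -> None:
        self.data[self.writer:self.writer + len(payload)] = payload
        self.writer += len(payload)

    def read(self, n: int) -> bytes:
        out = bytes(self.data[self.reader:self.reader + n])
        self.reader += n
        return out

    def discard_read_bytes(self) -> None:
        # Shift unread content to the front; the freed tail
        # becomes writable capacity again.
        unread = self.writer - self.reader
        self.data[0:unread] = self.data[self.reader:self.writer]
        self.reader = 0
        self.writer = unread

buf = ReadWriteBuffer(16)
buf.write(b"abcdef")
buf.read(4)                 # 'abcd' consumed
buf.discard_read_bytes()    # 'ef' shifted to index 0
```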
- DispatcherServlet — The front controller servlet in Spring MVC that handles incoming HTTP requests and orchestrates request processing through various component strategies including handler mapping, execution, exception resolution, and view rendering.
- DispatcherServlet Request Processing — Core Spring MVC request dispatching mechanism including handler mapping, handler adapter invocation, interceptor chains, and view resolution in the doDispatch method
- Distinguished Name (DN) certificate attributes — Certificate identification fields including Country Name (C), State/Province (ST), Locality (L), Organization (O), Organizational Unit (OU), Common Name (CN), and Email Address used to establish certificate identity and trust scope.
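For illustration, the fields combine into an OpenSSL-style subject string; the helper below is hypothetical, and the values are placeholders:

```python
def build_subject(c, st, l, o, ou, cn, email):
    # Assemble a "/KEY=value" subject string from DN components,
    # in the order OpenSSL conventionally prints them.
    parts = [("C", c), ("ST", st), ("L", l), ("O", o),
             ("OU", ou), ("CN", cn), ("emailAddress", email)]
    return "/" + "/".join(f"{k}={v}" for k, v in parts)

subject = build_subject("US", "California", "San Francisco",
                        "Example Corp", "Engineering",
                        "www.example.com", "admin@example.com")
```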
- Distributed transaction — Transaction management across multiple distributed systems or databases, requiring coordination protocols to maintain ACID properties in a distributed environment.
- DNS zone configuration for services — Configuring BIND DNS zone files to add A records mapping service domain names to cluster IP addresses, enabling service discovery through domain resolution
- DNS-based service discovery for multi-environment Apollo — Using separate DNS entries (config-test.od.com, config-prod.od.com) pointing to Kubernetes Ingress to route configuration service requests to environment-specific backend pods, enabling single image deployment across environments.
- Dnsmasq Docker deployment — Containerized DNS forwarding service providing DNS and DHCP functionality with multiple Docker image variants available
- Docker Alias Pattern — Technique for creating shell aliases that wrap Docker run commands, providing seamless CLI access to containerized tools as if they were natively installed.
- Docker architecture components — Three core elements of Docker: client (CLI interface), containers (running instances managed by daemon), and registry (image repository service like Docker Hub).
- Docker as Helm prerequisite — The requirement that Docker must be installed before using Helm, as Helm typically works with containerized applications packaged in Docker images
- Docker bridge networking — Network architecture pattern where Docker containers connect through a bridge interface, allowing containers on the same host to communicate while providing isolation from external networks
- Docker bulk image and container removal — Commands for forcibly removing all Docker containers and images in a single operation using command substitution with docker ps and docker images queries.
- Docker cgroup driver configuration for Kubernetes — Resolving kubelet startup failures by aligning Docker's cgroup driver with Kubernetes systemd requirements through /etc/docker/daemon.json configuration.
- Docker CLI Tools Containerization — Pattern for packaging command-line utilities like Apache Bench as Docker containers, enabling isolated, portable execution of tools without local installation.
- Docker commit operation — The process of creating a new image from a container's changes using 'docker commit', which captures the container's filesystem state as a new image.
- Docker Compose — A Docker tool for defining and running multi-container applications using YAML configuration files, allowing multiple images to be launched and managed together as a service stack.
- Docker Compose installation methods — Multiple installation approaches for Docker Compose including package manager installation via yum with Python pip, and direct binary download via curl from GitHub releases.
- Docker Compose Redis configuration — Using docker-compose.yml to configure and run Redis as a containerized service with port mapping
- Docker container images — Deep understanding of how container images are structured, layered, and used for application deployment
- Docker container isolation mechanisms — Technical methods by which Docker containers achieve resource isolation, including namespace-based isolation and comparison with virtual machine approaches
- Docker container lifecycle management — Commands and procedures for managing containers throughout their lifecycle: creating, starting, stopping, removing, and troubleshooting container conflicts with image removal.
- Docker container linking for registry — Using Docker's --link flag to connect the registry-web container to the registry container for inter-container communication via Docker's embedded DNS.
- Docker container monitoring and inspection — Commands for observing container behavior and internals: viewing logs, monitoring running processes, inspecting container metadata, and executing commands inside running containers.
- Docker container networking for microservices — Connecting Docker containers across different services using custom networks with docker run --net flag to enable inter-container communication.
- Docker container networking modes — Four network types for Docker containers: Bridge (NAT, default), None (no networking), Host (shares host network stack), and Container (joins another container's network namespace).
- Docker container persistence — The practice of exporting Docker containers to tar archives for backup or migration using docker export and corresponding import commands.
- Docker Container Technology — Container platform providing lightweight application packaging with image and container management, networking modes, volume mounting, and integration with development workflows and orchestration platforms.
- Docker core concepts — Three fundamental Docker building blocks: images (templates), containers (runtime instances), and registries (storage/distribution repositories), enabling containerized application packaging.
- Docker daemon configuration — daemon.json configuration file settings controlling Docker runtime behavior including storage driver, working directory (graph), registry mirrors, insecure registries, bridge IP (bip), and cgroup driver options.
- Docker data mounting patterns — Best practices for mounting configuration, secrets, and data files to separate directories in containers to avoid overwriting application files
- Docker Desktop data directory migration — The process of changing Docker's default storage location on Windows, which involves configuration changes to move the vm-data directory to an alternate drive or path.
- Docker Desktop for Mac Kubernetes — Docker Desktop includes a built-in single-node Kubernetes cluster that can be enabled through settings, providing a convenient local development environment for learning and testing Kubernetes operations.
- Docker Desktop for Mac Kubernetes Installation — Step-by-step guide to enabling Kubernetes in Docker Desktop on macOS, including downloading the application, enabling Kubernetes through settings, and verifying the cluster status.
- Docker Desktop Kubernetes integration — Local Kubernetes environment provided by Docker Desktop for development and testing, requiring specific ingress controller configurations compatible with its networking stack
- Docker Desktop Kubernetes port conflict — Startup failure in Docker Desktop's Kubernetes integration due to port 6443 being occupied by Windows services, particularly the Windows NAT (WinNAT) service
- Docker Desktop log debugging — Technique for diagnosing Docker Desktop startup failures by monitoring the backend log file at /c/Users/{username}/AppData/Local/Docker/log.txt
- Docker Desktop 內建 Kubernetes — Docker Desktop's built-in Kubernetes feature, which lets developers easily set up a local Kubernetes cluster for development and testing without installing additional tools.
- Docker development workflow — An iterative development approach using Docker to build and run containers with volume mounts, enabling code changes to be reflected immediately without rebuilding images.
- Docker fixed CIDR IP allocation — Using the --fixed-cidr Docker option to restrict and control IP address ranges assigned to containers, preventing conflicts across multiple Docker hosts sharing a network segment.
- Docker fundamentals reference — Core Docker documentation covering containerization basics, commands, and essential operations for the Docker platform.
- Docker image and container relationship — The dependency between images and containers where containers are runtime instances created from images, and images cannot be removed if containers referencing them exist.
- Docker image build and push workflow — Multi-step process for building container images locally, pushing them to a Docker registry, and updating Kubernetes YAML manifests with the new image tags.
- Docker image management — Complete workflow for managing Docker images including pulling from registries, tagging with versions, pushing to remote repositories, and deleting with docker rmi.
- Docker image operations — Working with Docker images including listing, tagging, pushing to registries, pulling from registries, and removing images with dependency checks.
- Docker image persistence — The practice of saving Docker images to tar archives for backup or transfer using docker save and docker load commands.
- Docker image tagging and naming — The convention for naming Docker images with registry hostname, username, and image name (e.g., quay.io/username/image), enabling proper routing and organization.
- Docker image tagging and pushing — Workflow for preparing and sharing images via docker tag to add repository prefix and docker push to upload to Docker Hub registry
- Docker image tagging and pushing workflow — The process of building, tagging, and pushing Docker images to registries including Docker Hub and private registries, using docker build, docker tag, and docker push commands.
- Docker image vs container persistence — Key differences between docker save (for images) and docker export (for containers), including their respective load commands and use cases.
- Docker in DEVPOS context — Integration of Docker containerization technology within the DEVPOS framework, represented as a navigation category and component of the broader system.
- Docker installation and configuration — The process of installing Docker on CentOS systems including prerequisites (kernel 3.8+, SELinux disabled, firewalld stopped), yum repository setup, and daemon.json configuration for storage drivers, registry mirrors, insecure registries, and cgroup drivers.
- Docker installation on Windows Subsystem for Linux — Procedure for installing Docker within WSL (Windows Subsystem for Linux) environments using shell commands and service management.
- Docker installation script (get-docker.com) — Official Docker installation method using a shell script downloaded from get.docker.com that automates Docker setup on Linux systems.
- Docker JRE基础镜像构建 — Creating a minimal JRE8 base Docker image with Prometheus JMX monitoring agent, timezone configuration, and custom entrypoint script for Java application deployment in Kubernetes
- Docker label configuration — Declarative configuration method using Docker container labels to specify Traefik routing rules, network attachments, ports, and protocols without modifying Traefik's main configuration.
- Docker multi-stage build for Go CLI tools — Building Go command-line applications using Docker multi-stage builds with separate dev, build, and runtime stages, creating minimal Alpine-based runtime images for the compiled binary.
- Docker multi-stage build for Python — Dockerfile pattern with separate 'dev' stage for development environment and 'runtime' stage for production, copying source files and setting ENTRYPOINT for the final container
- Docker multi-stage builds — A Docker build technique using multiple FROM statements to create intermediate build stages, separating compilation dependencies from runtime environments for optimized image sizes.
- Docker multi-stage builds for Go — Dockerfile pattern using separate stages (dev, build, runtime) to compile Go applications and create minimal Alpine-based runtime containers.
- Docker multi-stage builds for Go applications — A Docker build technique using separate stages for development, compilation, and runtime to create minimal production images containing only the compiled Go binary and necessary assets.
- Docker multi-stage builds with Flask — Containerizing Flask applications using multi-stage Dockerfiles with separate development and runtime stages, dependency installation, and port exposure
- Docker multistage builds for Python — Docker optimization technique using multiple build stages (dev, debugging, runtime) to create isolated development environments and minimal production images with only Python runtime and application code.
- Docker MySQL deployment — Database deployment patterns using Docker containers to run MySQL, covering containerization strategies for relational database services.
- Docker MySQL master-slave replication — A database architecture pattern where MySQL database instances are configured in primary-secondary replication mode using Docker containers for fault tolerance and read scaling.
- Docker network bridge configuration — Configuration of Docker containers to use custom Linux bridges instead of the default docker0 bridge for multi-host container networking across physical servers.
- Docker network pruning — The docker network prune command for cleaning up unused Docker networks, helping maintain a clean container networking environment
- Docker port forwarding setup — Docker Desktop backend mechanism for exposing container ports to the host system via TCP port bindings (e.g., 127.0.0.1:6443)
- Docker private registry setup — Deploying a private Docker registry server using the registry:2 image with volume mounting for persistent storage and port mapping for access.
- Docker Registry — A storage and content delivery system for named Docker images, supporting both public repositories like Docker Hub and private self-hosted alternatives.
- Docker registry and authentication — Using Docker Hub and other registries to store and distribute images, including login authentication, tagging images for upload, and pushing images to repositories.
- Docker Registry API v2 — HTTP API interface for querying Docker registries, including endpoints like /v2/_catalog to list repositories and /v2/{name}/tags/list to list available tags for an image.
- Docker registry authentication — Using Docker Hub or private registries for storing and sharing images, including docker login authentication, credential storage in ~/.docker/config.json, and push/pull workflows for remote image repositories.
- Docker Registry v2 API — HTTP API endpoints for querying Docker registry contents, including /v2/_catalog for listing repositories and /v2/{name}/tags/list for listing image tags.
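These endpoints follow fixed URL patterns; the sketch below builds them and parses a sample response shape in Python (the registry host and payload are illustrative, and no network call is made):

```python
import json

def catalog_url(registry: str) -> str:
    # Lists all repositories in the registry.
    return f"https://{registry}/v2/_catalog"

def tags_url(registry: str, name: str) -> str:
    # Lists available tags for one image.
    return f"https://{registry}/v2/{name}/tags/list"

# Shape of a typical tags/list response (sample payload, not fetched).
sample = json.loads('{"name": "myapp", "tags": ["v1", "v2", "latest"]}')
latest_available = "latest" in sample["tags"]
```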
- Docker registry workflow — The standard process for pushing Docker images to a registry, involving login, container creation, commit, and push operations with authentication.
- Docker resource cleanup workflow — A comprehensive Docker cleanup routine that combines container, image, volume, and network removal commands for complete system maintenance.
- Docker service management — Basic Docker daemon lifecycle operations including starting and stopping the Docker service using sudo service commands.
- Docker versus virtual machine comparison — A comparison between Docker containers and traditional virtual machines, likely covering architectural differences, resource isolation mechanisms, and use case scenarios.
- Docker volume mounting and port mapping — Container runtime techniques for host-container integration: -v flag mounts host directories into containers (data persistence), -p flag maps host ports to container ports (service exposure), enabling interactive and containerized applications.
- Docker volume mounting for registry persistence — Using the -v flag to mount host directories (C:/docker.registry) into container paths (/var/lib/registry) for persistent storage of registry data.
- Docker volume pruning — The docker volume prune command for removing all unused volumes from the system, a maintenance operation to reclaim disk space
- Docker vs Virtual Machines Comparison — The architectural and operational differences between containerization (Docker) and traditional virtual machine technologies, covering resource isolation, performance, and use cases.
- Docker Windows default image storage location — Docker Desktop for Windows stores container images by default in the directory C:\ProgramData\DockerDesktop\vm-data, which can be relocated if needed.
- Docker Windows Hyper-V conflict — Docker Desktop for Windows cannot run concurrently with VMware Workstation because both rely on virtualization technology; Docker's dependence on Hyper-V is the underlying source of the conflict.
- Docker Windows version compatibility issues — Certain versions of Docker for Windows contain known bugs that prevent normal operation, requiring users to be selective about which version they install.
- Docker-based Python Development Workflow — Containerized development and runtime environment setup for Python Flask applications using multi-stage Dockerfiles with Alpine Linux
- docker-compose-environment-variables-for-mysql — Configuration method using environment variables (MYSQL_USER, MYSQL_PASSWORD, MYSQL_DATABASE, MYSQL_ROOT_PASSWORD) to initialize MySQL container settings and user credentials.
- docker-compose-mysql-configuration — Complete Docker Compose service definition for deploying MySQL 5.7 with port mapping, persistent volume storage, environment-based configuration, and auto-restart policy.
- docker-compose-mysql-service-configuration — A Docker Compose service definition for running MySQL 5.7 with port mapping (33060:3306), persistent volume mounting, restart policy, and environment-based credential configuration.
- docker-compose-port-mapping-syntax — Port forwarding configuration in Docker Compose using 'HOST:CONTAINER' format (e.g., '33060:3306') to expose container ports on specific host interfaces.
- docker-compose-restart-policy — The 'restart: always' directive in Docker Compose that configures containers to automatically restart regardless of exit status, ensuring service availability.
- docker-compose-restart-policy-always — The restart policy configuration (restart: always) that ensures containers automatically restart if they stop, fail, or if the Docker daemon restarts, providing self-healing behavior for services.
- docker-compose-service-definition-structure — The structural components of a docker-compose.yml file including version declaration, services block, service identifiers, image specification, port mappings, volume mounts, restart policies, and environment variables.
- docker-compose-version-3-syntax — Docker Compose file format version '3' which defines the structure and available features for multi-container Docker application orchestration.
- docker-compose-volume-mounting-for-database-persistence — Using Docker Compose volume mapping (- ./data:/var/lib/mysql) to persist MySQL data files on the host filesystem, ensuring data survives container lifecycle events.
- docker-compose.yml configuration — A YAML file format for defining Docker Compose services, including version specification, service definitions with container images, restart policies, environment variables, and port mappings.
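The MySQL-related Compose entries above all describe one recurring file shape; a minimal sketch (the image tag, port mapping, volume path, and environment variable names are taken from the entries above — service name and credential values are placeholders):

```yaml
version: "3"
services:
  mysql:
    image: mysql:5.7
    ports:
      - "33060:3306"            # HOST:CONTAINER — host port 33060 forwards to MySQL's 3306
    volumes:
      - ./data:/var/lib/mysql   # persist data files on the host across container lifecycles
    restart: always             # self-healing: restart on failure or after a daemon restart
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credentials only
      MYSQL_DATABASE: app
      MYSQL_USER: app
      MYSQL_PASSWORD: example
```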
- docker-container-automation-with-terraform — Using Terraform to programmatically create and manage Docker containers as infrastructure, treating containerized applications as declarative resources rather than manually running Docker commands
- docker-container-networking-for-go-and-redis — Connecting Go application containers to Redis containers requires Docker networking (--net redis flag) so containers can communicate using container names as hostnames (e.g., sentinel-0:5000).
- docker-environment-variable-configuration — Passing configuration to containers at runtime using -e FLAG=value syntax for environment variables, enabling application code to read settings via os.Getenv() without hardcoding credentials.
- docker-from-docker-compose-networking — A Docker Compose networking pattern where containers (like FluentD) can communicate with the Docker daemon socket mounted from the host to access container metadata and logs.
- Docker-in-Docker (DinD) — A problematic pattern where Docker runs inside a Docker container, often causing security and complexity issues that tools like kaniko aim to solve.
- Docker-outside-of-Docker development pattern — Mounting the host machine's Docker socket (/var/run/docker.sock) into a Docker container to enable the container to communicate with the parent Docker daemon, allowing containerized tools to manage sibling containers.
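The socket-mount pattern in the two entries above comes down to a single volume line in a Compose service (service and image names are illustrative):

```yaml
services:
  fluentd:
    image: fluentd
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # talk to the host's Docker daemon from inside the container
```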
- docker-port-mapping-syntax — Port mapping configuration in docker-compose.yml using the HOST:CONTAINER format (e.g., "33060:3306") to expose container ports on the host system for external access.
- docker-registry Secret — A specialized Kubernetes Secret type that stores Docker registry credentials, enabling Pods to automatically authenticate and pull images from private container registries without manual login.
- docker-registry-web UI — A web-based user interface (hyper/docker-registry-web) for browsing and managing Docker registry contents, linking to the registry container for API access.
- docker-volume-mounting-for-data-persistence — Using Docker Compose volume mapping syntax (./data:/var/lib/mysql) to mount host directories into container paths, enabling data persistence across container lifecycle events.
- Dockerfile for SSH container setup — A complete Dockerfile configuration that creates a containerized SSH server environment using Java 8 as the base image, including OpenSSH server installation, PAM configuration modification, and SSH key-based authentication setup.
- Dockerfile for Tomcat with JDK — Creating a custom Docker image combining CentOS 7, JDK 8u91, and Apache Tomcat 8.5.35 using a Dockerfile with environment configuration and port exposure.
- Dockerfile for Tomcat/JDK — A Dockerfile configuration that builds a custom Tomcat 8.5.35 server image with JDK 8u91 on CentOS 7, including environment variable setup and port exposure.
- Dockerfile instruction sets — Four groups of Dockerfile instructions for image building: USER/WORKDIR (execution context), ADD/EXPOSE (files and networking), RUN/ENV (configuration), and CMD/ENTRYPOINT (startup commands).
- Dockerfile instruction syntax — Core Dockerfile directives including FROM (base image), LABEL (metadata), RUN (build-time commands), ADD (file copying with auto-extraction), ENV (environment variables), CMD (default command), and EXPOSE (port declarations).
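A minimal Dockerfile sketch exercising the directives listed above; the base image, archive name, and paths echo the Tomcat/JDK entries but are illustrative rather than a verified build:

```dockerfile
FROM centos:7                                   # base image
LABEL maintainer="demo"                         # metadata
RUN yum install -y java-1.8.0-openjdk           # build-time command
ADD apache-tomcat-8.5.35.tar.gz /usr/local/     # copies files; tar archives are auto-extracted
ENV CATALINA_HOME=/usr/local/apache-tomcat-8.5.35   # environment variable
EXPOSE 8080                                     # declare the listening port
CMD ["/usr/local/apache-tomcat-8.5.35/bin/catalina.sh", "run"]   # default startup command
```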
- Dockerfile 構建流程 — Using Dockerfiles to convert application source code into Docker container images as an intermediate step in the Kubernetes deployment workflow.
- Dockerfile-free container building — An approach to container image construction that uses build tools to detect project configuration and dependencies rather than requiring manually written Dockerfiles.
- Dockerfile-free containerization — Approach to building container images directly from source code without Dockerfiles, using tools like Pack for simplified container creation workflows
- dockershim — Legacy Kubernetes component that translates CRI requests into Docker API calls, acting as an adapter between Kubernetes and Docker daemon. Part of the GenericRuntime component that bridges kubelet's CRI requests to container runtime implementations.
- Document Merge Pattern — A technique for combining multiple documents of the same format (e.g., CSV files) by merging headers from the first document with content from subsequent documents, used to aggregate generated report files.
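The merge pattern above can be sketched in a few lines of Python (function name and the sample report contents are hypothetical):

```python
import csv
import io

def merge_csv(documents):
    """Merge same-format CSV documents: keep the header row from the
    first document, then append the data rows of every document."""
    merged = []
    for i, doc in enumerate(documents):
        rows = list(csv.reader(io.StringIO(doc)))
        # first document contributes its header; later ones only their data rows
        merged.extend(rows if i == 0 else rows[1:])
    out = io.StringIO()
    csv.writer(out).writerows(merged)
    return out.getvalue()

reports = ["id,total\n1,10\n", "id,total\n2,20\n"]
print(merge_csv(reports))  # header once, then all data rows
```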
- Documentation alias system — A naming convention using aliases to create alternative references or shortcuts to documentation pages, improving discoverability and navigation.
- Documentation stub template — A minimal document structure containing metadata headers (title, author, tags, categories, date, TOC) with empty content sections
- Documentation template structure — A standardized documentation framework using metadata headers, tag systems, aliases, creation/update timestamps, and predefined section scaffolds to establish consistent knowledge base entries
- Domain service pattern implementation — Separating business logic into interface-implementation pairs (ReportManageDomainService) that orchestrate between controllers and external service clients
- Draft workflow — Content creation process in Hexo where unfinished work is stored as drafts and later published to posts, requiring a special server flag to preview.
- Drone build limit workaround — Techniques to bypass the 5000 build restriction in Drone's open-source edition by compiling from source with specific build tags like 'nolimit' and 'oss nolimit'.
- Drone CI — A container-based continuous integration and delivery platform with build limitations in the standard version that can be removed through source compilation with specific build tags.
- Drone CI/CD server — A self-hosted continuous integration and delivery platform that can run locally with custom domain configuration through hosts file mapping.
- Drone server configuration — Environment variables and settings for deploying Drone CI server, including SQLite database configuration, runner OS/architecture settings, server port/host, and Datadog telemetry endpoints.
- Dubbo Microservices CI/CD Pipeline — End-to-end continuous integration and delivery workflow for Dubbo microservices using Jenkins, Maven, and K8S, including base image creation, automated building, deployment, and cluster maintenance.
- Dubbo service Apollo integration pattern — Connecting Dubbo microservices to Apollo configuration center through JVM parameters (-Dapollo.meta, -Denv) and application.properties configuration, enabling runtime configuration updates without container image rebuilds or pod restarts.
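The two wiring points named above look roughly like this (the meta-server address, env value, and app.id are placeholders; apollo.bootstrap.enabled is a standard Apollo client key, shown here as an assumption about the setup):

```
# JVM startup flags
java -Dapollo.meta=http://apollo-meta.example:8080 -Denv=DEV -jar app.jar

# application.properties
app.id=demo-dubbo-service
apollo.bootstrap.enabled=true
```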
- Dubbo微服务交付架构 — A complete CI/CD delivery architecture combining Dubbo, Alibaba's open-source high-performance RPC framework, with Kubernetes, comprising a Zookeeper registry, Jenkins builds, a Harbor image registry, Ingress routing, and related components to automate the flow from code commit to service deployment.
- DuckDB WASM browser storage — Client-side database using DuckDB WebAssembly for local data persistence and AI memory systems in web applications
- Dummy AWS Credentials Configuration — LocalStack-specific AWS configuration pattern where credentials are configured with dummy values since validation is disabled and only the region setting matters
- Dynamic ApplicationContext hierarchy construction — Spring contexts can be added dynamically after application startup by creating new AnnotationConfigApplicationContext instances, registering configuration classes, setting parent references, and calling refresh().
- Dynamic authorization with Spring Security @PreAuthorize — Using Spring's @PreAuthorize annotation with custom validator beans to evaluate user permissions at method invocation time based on runtime parameters
- Dynamic Bean Registration in Spring — Techniques for programmatically registering and unregistering Spring beans at runtime using BeanDefinitionRegistry, BeanDefinitionBuilder, and DefaultListableBeanFactory
- Dynamic context addition and refresh — Pattern of adding new child contexts to an already-running parent context by registering beans, setting parent relationship, and calling refresh() to initialize the new context independently.
- Dynamic DNS (DDNS) update mechanism — Protocol for automatically updating DNS records in real-time when IP addresses change, commonly used with services like Cloudflare API to maintain domain-to-IP mappings for dynamic residential or business connections.
- Dynamic library updating security considerations — Security best practice warning against enabling pods to dynamically update libraries, with recommendations to limit traffic to necessary service dependencies only
- Dynamic Provisioning — Kubernetes mechanism for automatically creating PersistentVolumes on-demand based on PersistentVolumeClaim requests using StorageClass templates and storage plugins, eliminating manual PV management
- Dynamic volume provisioning — Automatic creation of PVs based on StorageClass when PVCs request storage, as opposed to static provisioning where administrators pre-create PVs before PVC claims.
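The dynamic flow described above needs only two objects: a StorageClass naming a provisioner, and a PVC referencing it — the cluster then creates a matching PV on demand. A minimal sketch (the class name, CSI driver, and size are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: ebs.csi.aws.com      # illustrative storage plugin; any installed CSI driver works
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  storageClassName: fast          # triggers on-demand PV creation via the class's provisioner
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```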
- East-West Gateway — A dedicated Istio gateway deployment specifically for handling inter-cluster and cross-network traffic, separate from the north-south ingress gateway that handles external user traffic.
- EasyExcel — An open-source Java library developed by Alibaba for simplified Excel file read/write operations using annotations, designed to reduce memory overhead and simplify Excel processing compared to traditional Apache POI.
- Eclipse dropins directory — A special directory in Eclipse installation that allows manual plugin installation by copying plugin files directly into specific subdirectories without requiring the Eclipse Marketplace or installation wizards.
- Eclipse plugin directory structure — The standard organizational format of Eclipse plugins containing two mandatory subdirectories: features/ (for feature definitions) and plugins/ (for actual plugin code), which must be preserved when relocating plugins.
- Eclipse plugin extraction — A technique for extracting installed Eclipse plugins from the features and plugins directories and relocating them to the dropins folder for portable, manually managed plugin installation.
- Eclipse plugin extraction workflow — A manual process for extracting Eclipse plugins installed via Marketplace by locating the plugin files in the Eclipse directory structure, copying them, and relocating them to the dropins directory for portable, manually managed plugin installation.
- Editor color scheme customization — Visual styling configuration for syntax highlighting elements including foreground/background colors, font styles, and color mapping for different token types like keywords, comments, and operators.
- editor.mouseWheelZoom setting — The boolean VSCode configuration setting that must be enabled to allow mouse wheel zooming functionality
- EFK stack — A logging architecture composed of Elasticsearch (storage and search), Fluent Bit/Fluentd (log collection and forwarding), and Kibana (visualization), commonly used for Kubernetes cluster log management.
- EHLO vs HELO commands — HELO initiates standard SMTP protocol handshake while EHLO initiates Extended SMTP (ESMTP) handshake, with EHLO additionally returning server capabilities like STARTTLS support
- EJB remote service lookup pattern — The standard Java EE pattern for looking up remote EJB interfaces using JNDI with fully qualified names in the format 'ejb-path#interface-full-classname', demonstrated with bpm/ejb/WorkflowEngineService#core.bpm.service.workflow.engine.WorkflowEngineServiceRemote.
- Elasticsearch — A Java-based distributed search and analytics engine built on Lucene, designed for high-performance, scalable document storage with full-text search, aggregation, and structured/unstructured data processing capabilities.
- Elasticsearch DSL (Domain Specific Language) — Elasticsearch's JSON-based query language used as an alternative to SQL, allowing complex queries and aggregations to be executed via REST API or tools like Kibana Dev Tools.
- Elasticsearch field types — Field type specifications in Elasticsearch mappings, including the keyword type for exact-match queries without tokenization and the text type for full-text search with tokenization and inverted indexing.
- Elasticsearch Integration for Performance Data — Storing frontend performance monitoring data in Elasticsearch using JSON format, optionally with middleware layers, enabling visualization and statistical analysis through Kibana dashboards.
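The keyword/text distinction can be illustrated with a small mapping — the body of a PUT request against a hypothetical logs index. Here status supports exact-match filters and aggregations, while message is tokenized into the inverted index for full-text search:

```json
{
  "mappings": {
    "properties": {
      "status":  { "type": "keyword" },
      "message": { "type": "text" }
    }
  }
}
```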
- Elasticsearch Search and Analytics — Distributed search and analytics engine built on Lucene providing full-text search with inverted indexes, JSON document storage, aggregation capabilities, and REST API for log analytics, monitoring, and search applications.
- Elasticsearch single-node deployment — Binary installation process including user creation (es), file descriptor limits (65536), memory locking settings, kernel parameter tuning (vm.max_map_count=262144), and index template configuration for k8s log patterns.
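The host tuning from the entry above conventionally lands in two config files (the file paths are the customary locations, shown as an assumption; the values are the ones cited above):

```
# /etc/sysctl.conf
vm.max_map_count = 262144

# /etc/security/limits.d/es.conf
es  soft  nofile   65536
es  hard  nofile   65536
es  soft  memlock  unlimited
es  hard  memlock  unlimited
```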
- ELK stack architecture — Log aggregation pipeline consisting of Filebeat (log collection), Kafka (message buffering), Logstash (processing/filtering), Elasticsearch (storage/search), and Kibana (visualization), deployed as sidecar containers with applications.
- ELK Stack architecture for Kubernetes — Centralized logging architecture combining FileBeat for log collection from containers, Kafka as message broker, Logstash for log processing, Elasticsearch for storage and indexing, and Kibana for visualization and analysis
- Emotional barriers in skill acquisition — The psychological discomfort and feeling of incompetence that naturally occurs during early skill learning, which must be recognized as normal and overcome rather than allowing it to cause abandonment of the learning process.
- EmptyDir Volume — An empty directory created when a Pod is started, shared by all containers in that Pod. Useful for temporary caching and storage, but deleted when the Pod is removed. Can be populated from gitRepo during Pod initialization.
- EnableLUA (User Account Control) — A Windows registry setting that controls User Account Control behavior; setting it to 0 disables UAC and can resolve issues like drag-and-drop file execution in Windows 11
- Enterprise Service Bus (ESB) — Centralized SOA implementation pattern solving heterogeneous system connectivity through protocol conversion, message parsing, routing, and common logic aggregation, characterized by its heavyweight nature and central messaging infrastructure.
- EntityManager manual injection pattern — Technique for manually injecting EntityManager into repository beans using reflection when Spring's automatic injection is not available or sufficient
- Enumeration-based authority mapping — Mapping business domain types to permission strings through enum constants to centralize and type-check authorization requirements
- Environment merging in Spring contexts — Automatic configuration inheritance mechanism where child contexts merge their parent's ConfigurableEnvironment settings, enabling hierarchical property and configuration management.
- Environment-based Configuration Pattern — Reading application configuration from environment variables for containerized deployments, particularly for sensitive data like database credentials
- Environment-separated namespace strategy — Kubernetes organizational pattern using dedicated namespaces (test, prod) to isolate application deployments, with separate database instances, configuration, and DNS entries for each environment while using identical container images.
- Environment-specific Maven builds — Build approach for packaging Java applications with different configurations based on target environments (development, testing, production)
- Envoy Bootstrap Configuration — The foundational configuration file that controls how the Envoy proxy initializes and operates, including parameters for static resources, dynamic configuration, listeners, and administrative interfaces.
- Envoy Configuration Merging — The process by which custom Envoy configuration is combined with default configuration, where singular values override defaults and repeated values are appended to existing collections.
- Envoy ext_authz filter — Envoy proxy's external authorization filter that delegates authorization decisions to an external service, enabling custom authorization logic separate from the proxy configuration.
- Envoy SDS API integration — Secret Discovery Service API that enables dynamic distribution of TLS certificates and secrets to Envoy proxies, allowing SPIRE to act as an external CA for Istio workloads.
- ephemeral developer environment — A temporary, on-demand development workspace that can be created and destroyed as needed, typically configured via files like .gitpod.yml and Dockerfiles for environment specification.
- Ephemeral Volumes — Volumes with lifecycle tied to the Pod - created when the Pod starts and destroyed when the Pod is deleted. EmptyDir is the primary example. Used for temporary storage and caching scenarios.
- Epic pages (總目錄) — Top-level index or table of contents pages that provide reference access points to clusters of connected Zettelkasten notes.
- Epoll I/O model — An efficient I/O event notification facility for Linux that scales better than select for large numbers of file descriptors.
- ER/Studio reverse engineering — Database reverse engineering process using ER/Studio 8 to extract and visualize existing MySQL database structures through ODBC connections.
- ES module type attribute — The HTML script tag attribute type="module" that enables browsers to treat JavaScript files as ES6 modules with import/export functionality and strict mode scope
- etcd in Kubernetes — A distributed key-value store used by Kubernetes to persist cluster state and configuration data, enabling rapid restoration after crashes and maintaining the single source of truth for the cluster.
- Event broker implementation pattern — The practice of building event-broker middleware from scratch to handle event routing and communication between microservices, as demonstrated by the blog project's custom event broker.
- EventExecutorGroup for blocking handlers — A technique to offload time-consuming or blocking handler logic to a separate thread pool, preventing pipeline bottlenecks by using DefaultEventExecutorGroup with specified thread count.
- EventLoop Blocking Prevention — Time-consuming tasks should not be executed within EventLoop threads as they will block I/O operations; instead, business thread pools should be used via custom thread pools in ChannelHandler callbacks or pipeline configuration.
- Evolutionary note preservation — The principle of never deleting old notes; instead, creating new notes that link to and supersede previous thinking while documenting what was inadequate.
- Excel annotation mapping — The use of Java annotations to define mappings between object fields and Excel columns, enabling model-based serialization and deserialization of spreadsheet data.
- ExcelReader pattern — A streaming approach to reading Excel files using an event listener (AnalysisEventListener) that processes data row-by-row rather than loading entire files into memory.
- ExcelWriter pattern — A programmatic approach to writing Excel files that supports multiple sheets, model mapping with annotations, and structured data export through a fluent API.
- Executor Interface Implementations — Three practical Java Executor implementation patterns: DirectExecutor (synchronous execution), ThreadPerTaskExecutor (spawns new thread per task), and SerialExecutor (serializes task execution through a queue).
- Express vs Koa comparison — Comparison between Express.js (the established Node.js web framework) and Koa (a modern, lighter alternative) to help developers choose between them.
- expression DNA modeling — Capturing the distinctive communication style of experts—including tone, sentence rhythm, vocabulary preferences, and rhetorical patterns—to generate AI responses that feel authentic to the source's voice while applying their cognitive frameworks.
- Ext Authz Service — An external authorization server implementation that integrates with Envoy's ext_authz filter to provide custom authorization logic for Istio service mesh, supporting both HTTP and gRPC protocols.
- External Service Integration Pattern — The technique of connecting Kubernetes workloads to services running outside the cluster by creating a Service without a selector and manually defining Endpoints with external IP addresses
- external-workflow-triggering — The practice of invoking GitHub Actions workflows from outside the GitHub platform using repository dispatch events, enabling integration with external tools and services.
- False Positive Rate — Configurable probability parameter in Bloom filters (e.g., 0.000001) representing the acceptable rate of incorrect "might contain" results, trading off accuracy against memory usage.
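The accuracy-versus-memory trade-off can be made concrete with the standard Bloom filter sizing formulas (a sketch; function name is hypothetical — m is the bit-array size and k the hash-function count for n items at false-positive rate p):

```python
import math

def bloom_size(n, p):
    """Bit-array size and hash-function count for n items at false-positive rate p."""
    m = math.ceil(-n * math.log(p) / math.log(2) ** 2)  # total bits: m = -n ln p / (ln 2)^2
    k = round(m / n * math.log(2))                      # hash functions: k = (m/n) ln 2
    return m, k

bits, hashes = bloom_size(1_000_000, 0.000001)
print(bits // 8 // 1024, "KiB,", hashes, "hash functions")
```

At p = 0.000001 the filter needs roughly 3.5 MiB for a million items; relaxing p shrinks it sharply, which is exactly the trade-off the entry describes.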
- FanoutExchange Listener Pattern — Message routing pattern where a single message triggers multiple parallel listeners, each querying different database tables (e.g., vs_user_tag_relation, vs_withdraw, vs_payment) and storing results in shared Redis storage
- FastAPI + WebSocket file watching pattern — Real-time data synchronization architecture using watchfiles to monitor directory changes, clearing related caches and broadcasting updates through WebSocket connections to trigger frontend SWR revalidation
- feature-ticket-system — A project management and development workflow practice where features are tracked using numbered tickets with descriptive hierarchies (e.g., Feature #29513: 代理管理/代理列表/代理域名/導出/導出架構設計).
- Feign client for microservice communication — Declarative REST client pattern using Spring Cloud OpenFeign with custom interceptors and configuration for service-to-service communication
- Fiddler Classic — A web debugging proxy tool for Windows that captures HTTP/HTTPS traffic between computers and the internet, commonly used for packet inspection and mobile debugging.
- Fiddler Configuration — Specific settings and options within Fiddler Classic for customizing capture behavior, including HTTPS decryption settings, proxy rules, and certificate management.
- File existence checking with os.path — Using Python's os.path module and isfile() function to verify file existence before attempting file operations, preventing FileNotFoundError exceptions in robust file handling code.
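A minimal sketch of the guard described above (function and path names are hypothetical):

```python
import os.path

def read_if_exists(path):
    """Return file contents, or None instead of raising FileNotFoundError."""
    if not os.path.isfile(path):   # False for missing paths, and also for directories
        return None
    with open(path) as f:
        return f.read()

print(read_if_exists("no/such/file.txt"))  # → None
```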
- File I/O with Go's ioutil package — The io/ioutil standard library package provides ReadFile and WriteFile functions for reading and writing file contents as byte slices, commonly used for loading configuration and data files.
- File watching pattern — Build automation technique that monitors specified file patterns for changes and triggers predefined tasks, enabling continuous integration workflows.
- Filebeat sidecar pattern — Log collection deployment where Filebeat container runs alongside application container in the same Pod, sharing volume mounts for log directories (/logm, /logu) and shipping logs to Kafka topics with environment-based naming (k8s-fb-$ENV-%{[topic]}).
- FileDownloadRecordEntity Schema — Database entity schema for tracking file download records, including fields for report source, enumeration, search conditions (MD5), file location, department/admin IDs, processing status, and completion timestamps
- FileStateRegistry — Process-level singleton registry that prevents file write conflicts in parallel sub-agent workflows through per-path threading locks and staleness detection (sibling overwrites, external modifications, write-without-read warnings)—returning guidance to models instead of blocking execution.
- First 20 Hours Learning Method — A rapid skill acquisition framework positing that focused practice for 20 hours is sufficient to achieve competence in most skills, through deconstruction, barrier removal, and deliberate practice.
- First-time open source contribution — The process of making an initial contribution to open source projects, typically involving understanding project workflows, setting up development environments, and submitting pull requests for the first time
- FlagSet for subcommand isolation — Creating separate flag.FlagSet instances for each CLI subcommand to enable independent flag definitions and parsing.
- Flannel host-gw backend — Flannel networking mode that uses host gateway routing with direct IP forwarding between nodes without encapsulation, achieving better performance (~10% overhead vs 20-30% for tunnel-based solutions) but requiring L2 connectivity
- Flannel overlay networking — CNI network plugin that creates virtual overlay networks for container communication across Kubernetes cluster nodes using subnet allocation and routing
- Flannel pod network CIDR alignment — The critical requirement to match the pod-network-cidr parameter used during kubeadm init with the Network setting in kube-flannel.yml for proper cluster networking.
- Flannel VXLAN backend — Flannel network plugin implementation using VXLAN (Virtual Extensible LAN) with VTEP tunnel endpoints and VNI identifiers to encapsulate L2 traffic over L3 networks, solving traditional data center network limitations
- Flash sale mechanics — Technical implementation patterns for time-limited promotional sales, involving inventory management, concurrent request handling, countdown timers, and instant stock depletion prevention systems.
- Flask request handling — Processing incoming HTTP request data including reading JSON payloads from request bodies and content-type headers
- Flask REST API Data Storage Migration — Pattern for migrating Flask web application data storage from JSON file-based persistence to Redis database
- Flask routing and HTTP methods — URL endpoint mapping using decorators with HTTP method constraints (GET, POST) to handle different types of web requests
- flask-demo — A minimal test project demonstrating basic Flask web application structure with placeholder content
- Flat folder structure — A minimalist file organization approach that reduces reliance on deep folder hierarchies in favor of link-based navigation and search-based discovery.
- Fleeting notes — Temporary, quick-capture notes used in Zettelkasten methodology for recording immediate inspirations and ideas that require later processing into permanent notes
- Fluent Bit on Kubernetes — Deployment and configuration of Fluent Bit as a log collector/forwarder within Kubernetes clusters, typically installed via DaemonSet for node-level log collection from container stdout/stderr.
- Fluentd log streaming with Docker Compose — Using docker-compose logs -f fluentd to follow real-time log output from Fluentd containers running in Docker Compose.
- Fluentd vs Fluent Bit — Comparison between two log collection tools from the Fluent ecosystem, where Fluent Bit is the lightweight, high-performance forwarder and Fluentd offers more extensive plugin support and flexibility.
- fluentd-container-log-monitoring — Monitoring FluentD container output using docker-compose logs to observe log collection behavior, identify issues with log forwarding, and verify data flow through the logging pipeline.
- fluentd-logs-viewing-command — The docker-compose logs -f fluentd command streams FluentD container logs in real-time for monitoring and debugging log collection pipelines.
- Fork synchronization workflow — A Git workflow process for keeping a forked repository updated with changes from the original upstream repository using remote configuration and pull commands.
- Free cloud storage services — Cloud storage providers offering no-cost storage tiers, typically with account-based access and capacity limitations, for personal file backup and sharing needs.
- Free domain and DNS management for development — Process of acquiring and configuring free domain names with DNS services, essential for exposing local development environments and testing webhooks.
- Free domain resource stack — A complete zero-cost website setup workflow combining Freenom for free domain registration, Cloudflare for DNS management, and SSL For Free for HTTPS certificates.
- Free domain services — No-cost domain name providers such as Freenom offering free TLDs like .ml, .cf, .gq, .tk, and .ga, often with automatic renewal requirements
- Freenom — A domain registration service that offers free top-level domains (TLDs) including .ml, .cf, .gq, and .tk, requiring Firefox browser and Gmail login for registration.
- Freenom domain management — Automated renewal system for Freenom free domains using Docker containers to maintain domain registration without manual intervention
- Front matter metadata structure — YAML-style document header configuration including title, author, tags, categories, table of contents toggle, and timestamp
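A stub carrying the metadata fields listed above (all values are placeholders):

```yaml
---
title: "Example note"
author: someone
tags: [docker, kubernetes]
categories: [devops]
toc: true
date: 2024-01-01 00:00:00
updated: 2024-01-01 00:00:00
---
```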
- Frontend Performance Monitoring — Using browser APIs like window.performance and Navigation Timing API to collect metrics on web page load performance, network latency, and rendering times.
- FTS5 Cross-Session Search — Memory retrieval system using SQLite FTS5 full-text search to index and query conversation history across sessions, enabling context-aware responses
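The retrieval idea can be sketched with Python's bundled sqlite3 module, assuming its SQLite build includes the FTS5 extension (table and column names are hypothetical):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# one FTS5-indexed row per remembered conversation snippet, tagged by session
db.execute("CREATE VIRTUAL TABLE memory USING fts5(session, content)")
db.executemany("INSERT INTO memory VALUES (?, ?)", [
    ("s1", "user prefers dark mode in the editor"),
    ("s2", "deployment runs on Kubernetes with Flannel"),
])
# full-text query spanning all sessions; FTS5's default tokenizer is case-insensitive
rows = db.execute(
    "SELECT session FROM memory WHERE memory MATCH 'kubernetes'"
).fetchall()
print(rows)
```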
- Function-based DAO template extraction — A refactoring pattern that extracts repetitive connection management and resource handling logic from DAO/Service layers into reusable function-based templates
- Function<T, R> interface — A built-in Java functional interface representing a unary function that accepts one argument of type T and produces a result of type R, primarily using the apply() method.
- Functional transaction management pattern — A pattern using functional interfaces to handle database transactions with automatic commit/rollback and connection cleanup, reducing boilerplate in service layer code
- GBrain AI Agent Knowledge System — YC President Garry Tan's open-source personal knowledge management system for AI Agents, using Postgres + pgvector hybrid retrieval with 25 integrated skills for automatic knowledge ingestion, enrichment, and continuous learning through read-write cycles.
- gcloud auth application-default login — Authentication command that creates Application Default Credentials (ADC) for Google client libraries and SDK tools, with support for quota project assignment
- gcloud components management — Commands to install ('gcloud components install') and update ('gcloud components update') Cloud SDK components like kubectl, with network diagnostics for troubleshooting SSL errors
- gcloud init — Command to configure Google Cloud SDK settings including authentication, project selection, and default compute region/zone configuration
- GCP deployment toolchain — A technology stack combining Docker for containerization, Kubernetes for orchestration, Terraform for infrastructure provisioning, and GitHub Actions for CI/CD automation when deploying to Google Cloud Platform.
- Gemma 4 26B model — Google's open-source 26B parameter local AI model capable of running offline on single 24GB GPU with 245K token context window
- gen-helloworld.sh script — A flexible script for generating YAML for the helloworld service with customizable options for version, service inclusion, and deployment inclusion, enabling deployment of custom versions.
- Generic template methods for read/write operations — Type-safe, reusable connection handling templates (readOptional, writeOptional) that abstract database connection lifecycle, transaction management, and resource cleanup
- generic-methods — Java methods that declare their own type parameters independent of the class's generic type, enabling methods to operate on different types than the class-level generics.
- generic-methods-java — Java methods that declare their own type parameters independent of the class's generic type, enabling methods to operate on different types than the class-level generics.
- generic-type-independence — The principle where generic methods can define type parameters separate from and independent of the class's generic type declarations, enabling flexible type handling at method level.
- generic-types-in-java — Three categories of generics in Java: generic classes, generic interfaces, and generic methods, each enabling type-safe programming patterns with parameterized types.
- Genymotion — Android emulator designed for app testing and development, providing a virtual Android environment on desktop computers
- Gist Code Embedding — A technique for importing and displaying code snippets in blogs using GitHub Gist scripts, enabling code sharing and presentation through embeddable widgets.
- git am (apply mailbox) — A Git command that applies patches generated by format-patch while preserving original commit metadata including author information, used with the -s flag for sign-off functionality.
- Git Bash prompt configuration for Windows Terminal — Bash profile configuration using PROMPT_COMMAND with wslpath -w to convert Unix paths to Windows paths and emit ANSI escape codes for directory tracking in Windows Terminal.
- Git branch tracking and remote management — Commands for establishing and managing tracking relationships between local and remote branches, including setting upstream branches with --set-upstream-to, --set-upstream, and --track options.
- Git cherry-pick — A Git operation that applies changes from a specific commit to the current branch, mentioned as a related technique to patch workflow
- Git clean for removing untracked files — Using git clean with -f and -d flags to remove untracked files and directories from the working directory, useful for returning to a clean state.
- Git commit amendment techniques — Methods for modifying the most recent commit using --amend, including changing commit messages, adding forgotten files, or canceling commits entirely with soft and hard resets.
- Git force operations and their risks — Force push and force reset commands that overwrite remote history, including warnings about potential data loss when overwriting others' work and how to reset to remote branches.
- git format-patch — Git command that generates email-formatted patch files from commits, supporting ranges between commits, single commits, or commit history from a specific point
- Git Hooks自动重建 — Automatic knowledge-graph rebuilding triggered by post-commit and post-checkout Git hooks, ensuring the graph stays synchronized whenever the codebase changes
- Git patch workflow — A Git technique for extracting changes as portable patch files using format-patch, then applying them to other repositories with git apply or git am
- Git rebase — A Git operation that replays commits from one branch onto another, creating a linear history by using a different base commit, commonly used to integrate changes and maintain clean project history.
- Git rebase configuration — Configuration options that set rebase as the default behavior for git pull operations, including branch-specific and global settings like branch.master.rebase and pull.rebase.
- Git rebase for commit squashing — Using interactive rebase to combine multiple commits into a single commit by changing 'pick' to 'squash' in the rebase editor, useful for cleaning up commit history before pushing.
- Git remote management commands — Essential Git commands for managing remote repositories including adding remotes, setting URLs, and verifying remote configurations with the -v flag
- Git reset and commit history manipulation — Commands and techniques for rewinding Git history, including hard resets to clean states, soft resets that preserve changes, and resetting to remote branches or specific commit hashes.
- git reset hard mode — A destructive Git reset operation using --hard flag that discards all staged and unstaged changes, described in the source as '後悔藥' (regret medicine) for undoing commits.
- Git single-branch cloning — Using the --single-branch flag with git clone to download only a specific branch rather than all repository history, reducing clone time and disk usage.
- Git stash workflow for temporary changes — The stash command set for temporarily saving work-in-progress changes, including saving, applying, popping, listing, and clearing stashed states without committing.
- Git tutorial for beginners — Accessible Git introduction guide designed for beginners and referenced in development learning resources
- Git upstream remote — The original repository from which a fork was created, tracked as a remote named 'upstream' to pull changes from the source project.
- Gitea — A lightweight, self-hosted Git service written in Go that provides a painless alternative to GitHub/GitLab for version control needs.
- Gitea server configuration — Configuration management for the Gitea self-hosted Git service, with official example configuration available in the repository's app.example.ini reference file.
- GitHub Actions — GitHub's CI/CD automation platform that enables workflow creation for building, testing, and deploying code directly within GitHub repositories
- GitHub Actions (CI/CD) — GitHub's continuous integration and continuous delivery automation platform for building, testing, and deploying code workflows.
- GitHub Actions deployment for Hexo — Automated deployment workflow using GitHub Actions (stored in .github/workflows/node.js.yml) to automatically build and deploy Hexo sites without manual intervention.
- GitHub Actions workflow structure — The hierarchical organization of GitHub Actions components: events trigger workflows, which contain multiple jobs, executed on runners (GitHub-hosted or self-hosted).
- GitHub filename search syntax — The special search operator 'filename:' followed by a quoted string pattern to locate files by their exact or partial names across GitHub repositories.
- GitHub Packages — A package hosting service integrated with GitHub that allows developers to publish, store, and manage software packages as part of their CI/CD workflows.
- GitHub Packages integration with GitHub Actions — The capability to automatically build and deploy packages (Maven, Gradle, Docker) to GitHub Packages registry through CI/CD workflows.
- GitHub Pages deployment — GitHub's hosting service for static websites that uses username.github.io repository naming convention, enabling free blog hosting without separate servers.
- GitHub Personal Access Token — A classic authentication method required by GitHub Packages for secure API access, with token permissions including read:packages scope, though tokens committed to repositories are automatically removed for security.
- GitHub project tracking — Using GitHub's project boards and issue tracking as a personal knowledge management tool to organize tasks, learning resources, and development projects in a kanban-style interface.
- GitHub Projects for Knowledge Organization — Using GitHub Projects' kanban board functionality as an organizational tool for managing personal knowledge base content and documentation workflows.
- GitHub Projects for knowledge tracking — Using GitHub Projects as a kanban-style board for organizing and tracking personal knowledge management tasks, learning resources, and documentation workflows.
- GitHub Projects for Personal Knowledge Management — Using GitHub Projects kanban boards to organize and track personal knowledge management tasks, documentation topics, and learning resources with structured workflow management.
- GitHub Projects integration workflow — The practice of maintaining TODO items in local documentation with explicit intent to migrate them to GitHub Projects for centralized project management and tracking.
- GitHub search features — The search capabilities built into GitHub for finding code, repositories, users, and specific information across the platform.
- GitHub search operators — Special query syntax and operators available in GitHub's search functionality to filter and refine searches across repositories, code, issues, and other GitHub resources.
- GitHub SSH authentication — The process of configuring and using SSH keys to authenticate with GitHub repositories securely without password prompts.
- GitHub workflow directory configuration — The required file path `.github/workflows` where YAML workflow definition files must be placed in a GitHub repository to be recognized and executed by GitHub Actions.
- GitHub-based Helm chart repositories — Using GitHub repositories to store and distribute Helm charts, as exemplified by the yudady/charts reference
- GitHub-hosted runners vs self-hosted runners — The two execution environment options in GitHub Actions: GitHub-provided virtual machines (specified with values like `ubuntu-latest`) or custom self-managed infrastructure.
- github-workflow-dispatch — The broader category of GitHub Actions event types that enable programmatic triggering of workflows, including both repository_dispatch and workflow_dispatch events with different capabilities.
- GitOps — An operational methodology that uses Git as the single source of truth for declarative infrastructure and application configurations, with automated synchronization to the target environment.
- GitOps and Infrastructure as Code — Operational methodology using Git as single source of truth for declarative infrastructure and application configurations, with tools like ArgoCD, Terraform, and Kustomize enabling automated synchronization and version-controlled deployments.
- GitOps deployment workflow — Infrastructure-as-code practice using Git as single source of truth where declarative configurations stored in Git repositories are automatically synchronized to target environments by tools like ArgoCD, enabling version-controlled, auditable deployments.
- GitOps-based configuration deployment — Using Apollo branch in version control for configuration-driven application updates, where code changes trigger Jenkins builds, creating new tagged container images that can be deployed to different environments by updating only image tags in Kubernetes manifests.
- Gitpod — A cloud-based development environment service that provides browser-based, ephemeral workspaces for GitHub, GitLab, and Bitbucket repositories, using VS Code in the browser and Docker images for reproducible configurations.
- GKE Blue-Green Upgrade Strategy — A Kubernetes cluster upgrade methodology that creates temporary node pools to safely migrate workloads between GKE versions with minimal downtime.
- GKE cluster credentials — Method using 'gcloud container clusters get-credentials' to configure kubectl access to Google Kubernetes Engine clusters, requiring project ID, cluster name, and zone
- GKE cluster zones and regions — Google Compute Engine zone selection (e.g., asia-east1-b, asia-east2-b) as default project settings and required parameter for cluster operations
- GKE upgrade workflow — A 14-step standardized procedure for upgrading Google Kubernetes Engine clusters using Terraform, including authentication, version selection, node pool creation, and post-migration cleanup.
- GNU Make Terraform Workflow Automation — Using GNU Make to create reusable commands for common Terraform operations like initializing, planning, and applying infrastructure changes
- Go backend development patterns — Practical backend development patterns and techniques in Go including WebSocket communication, RPC frameworks, MySQL integration, and multi-process service architectures.
- Go build tags — Go language feature for conditional compilation that enables or disables code sections at build time, used here to select between Drone editions and remove feature limitations.
- Go by Example — A hands-on tutorial and learning resource for the Go programming language that teaches through annotated example programs, with Chinese language localization available.
- Go development environment setup — Initial configuration steps for Go development including setting up VS Code with the Go extension plugin to enable language support, IntelliSense, and tooling integration.
- Go flag package — Go's standard library package for building command-line interfaces with support for subcommands, flags, and argument parsing.
- Go FlagSet subcommand pattern — A design pattern for building CLI tools with subcommands using flag.NewFlagSet() to create separate flag parsing contexts for commands like 'get' and 'add', enabling modular command-line interfaces.
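  A minimal sketch of the FlagSet subcommand pattern: the `get` subcommand matches the entry above, but the `-key` flag and the `runGet` helper are illustrative names, not from the original note.

  ```go
  package main

  import (
  	"flag"
  	"fmt"
  	"os"
  )

  // runGet parses flags for a hypothetical "get" subcommand using its own
  // flag.FlagSet, so its flags are independent of any other subcommand's.
  func runGet(args []string) (string, error) {
  	fs := flag.NewFlagSet("get", flag.ContinueOnError)
  	key := fs.String("key", "", "key to look up")
  	if err := fs.Parse(args); err != nil {
  		return "", err
  	}
  	return *key, nil
  }

  func main() {
  	if len(os.Args) < 2 {
  		fmt.Println("expected a subcommand: get | add")
  		os.Exit(1)
  	}
  	// Dispatch on the subcommand name; each case parses only its own flags.
  	switch os.Args[1] {
  	case "get":
  		key, err := runGet(os.Args[2:])
  		if err != nil {
  			os.Exit(1)
  		}
  		fmt.Println("get:", key)
  	default:
  		fmt.Println("unknown subcommand:", os.Args[1])
  	}
  }
  ```

  Invoked as `mycli get -key user:1`, the `get` FlagSet sees only `-key user:1`; an `add` subcommand would get its own FlagSet with entirely separate flags.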
- Go function syntax and patterns — Functions in Go can take multiple inputs and return multiple outputs, with support for named return parameters and the ability to define functions with specific purposes for easier testing.
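  The function patterns above, sketched with two small (illustrative) functions: `divmod` returns multiple values, and `minMax` uses named return parameters with a bare `return`.

  ```go
  package main

  import "fmt"

  // divmod returns both quotient and remainder: multiple return values.
  func divmod(a, b int) (int, int) {
  	return a / b, a % b
  }

  // minMax declares named return parameters (min, max); the bare return
  // at the end returns whatever those variables currently hold.
  func minMax(xs []int) (min, max int) {
  	min, max = xs[0], xs[0]
  	for _, x := range xs[1:] {
  		if x < min {
  			min = x
  		}
  		if x > max {
  			max = x
  		}
  	}
  	return
  }

  func main() {
  	q, r := divmod(17, 5)
  	fmt.Println(q, r) // 3 2
  	lo, hi := minMax([]int{4, 1, 9, 3})
  	fmt.Println(lo, hi) // 1 9
  }
  ```

  Small single-purpose functions like these are also what makes the "easier testing" point concrete: each can be exercised directly without any setup.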
- Go game backend architecture — A comprehensive backend implementation pattern using Go that incorporates high concurrency, WebSocket communication, RPC protocols, MySQL databases, and multi-process services for strategy games.
- Go HTTP handler pattern — The standard Go pattern for creating web server endpoints using http.HandleFunc to register functions that implement the http.ResponseWriter and *http.Request interface.
- Go JSON marshaling and unmarshaling — The process of converting between Go data types (structs) and JSON format using the encoding/json package, enabling data interchange between applications and external systems.
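  A minimal round-trip sketch of encoding/json marshaling; the `Customer` type echoes the struct entry later in this list, but its `ID`/`Name` fields are assumed for illustration.

  ```go
  package main

  import (
  	"encoding/json"
  	"fmt"
  )

  // Customer is an illustrative type; the backtick json tags map the
  // exported Go field names to lowercase JSON keys.
  type Customer struct {
  	ID   int    `json:"id"`
  	Name string `json:"name"`
  }

  func main() {
  	// Struct -> JSON bytes.
  	b, err := json.Marshal(Customer{ID: 1, Name: "Ada"})
  	if err != nil {
  		panic(err)
  	}
  	fmt.Println(string(b)) // {"id":1,"name":"Ada"}

  	// JSON bytes -> struct; Unmarshal needs a pointer to fill in fields.
  	var c Customer
  	if err := json.Unmarshal([]byte(`{"id":2,"name":"Grace"}`), &c); err != nil {
  		panic(err)
  	}
  	fmt.Println(c.ID, c.Name) // 2 Grace
  }
  ```

  Only exported (capitalized) fields participate in marshaling, which is why the tags are needed to produce conventional lowercase JSON keys.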
- Go language — A statically typed, compiled programming language designed for simplicity, efficiency, and strong support for concurrent programming, commonly used in DevOps and backend development.
- Go loop constructs — Go's for loop is the only loop construct, supporting three patterns: infinite loops (for {}), conditional loops (for i < n), and range-based iteration over collections (for x, item := range collection).
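  The three `for` patterns named above, in one short sketch:

  ```go
  package main

  import "fmt"

  func main() {
  	// Conditional / counting loop.
  	sum := 0
  	for i := 1; i <= 3; i++ {
  		sum += i
  	}
  	fmt.Println(sum) // 6

  	// Range-based iteration over a collection (index, element).
  	for i, item := range []string{"a", "b"} {
  		fmt.Println(i, item)
  	}

  	// Infinite loop (for {}), exited explicitly with break.
  	count := 0
  	for {
  		count++
  		if count == 2 {
  			break
  		}
  	}
  	fmt.Println(count) // 2
  }
  ```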
- Go module initialization — The `go mod init` command initializes a Go module by creating a go.mod file, which defines the module path and manages dependencies for Go projects.
- Go module system (go.mod) — Go's built-in dependency management and versioning system that defines project requirements and module metadata
- Go modules — Go's dependency management and module system using 'go mod init' to define module paths and manage imports, providing the foundational structure for Go applications.
- Go modules (go mod) — Go's built-in dependency management system for defining module paths and managing project dependencies through go.mod files.
- Go modules and dependency management — The Go module system for managing dependencies, defining module paths with 'go mod init', and importing external packages like go-redis into applications.
- Go modules and packages — Go's code organization system where packages are compiled source files in the same directory, and modules are collections of packages released together with a go.mod file declaring the module path.
- Go net/http package — Go's standard library package for implementing HTTP clients and servers, providing ListenAndServe for server creation, HandleFunc for route registration, and request/response handling interfaces.
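  A small net/http sketch: the handler shape and HandleFunc registration are as described above; `httptest` stands in for a real `ListenAndServe(":8080", nil)` call so the example runs without binding a port, and the route and greeting are illustrative.

  ```go
  package main

  import (
  	"fmt"
  	"io"
  	"net/http"
  	"net/http/httptest"
  )

  // hello has the standard handler signature: it writes the response via
  // http.ResponseWriter and reads request data from *http.Request.
  func hello(w http.ResponseWriter, r *http.Request) {
  	fmt.Fprintf(w, "hello, %s", r.URL.Path[1:])
  }

  func main() {
  	// Register the route on the default mux, exactly as you would before
  	// calling http.ListenAndServe(":8080", nil).
  	http.HandleFunc("/", hello)

  	// Exercise the handler through a throwaway test server.
  	srv := httptest.NewServer(http.DefaultServeMux)
  	defer srv.Close()

  	resp, err := http.Get(srv.URL + "/gopher")
  	if err != nil {
  		panic(err)
  	}
  	defer resp.Body.Close()
  	body, _ := io.ReadAll(resp.Body)
  	fmt.Println(string(body)) // hello, gopher
  }
  ```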
- Go os.Args command-line argument access — Method for accessing command-line arguments in Go using the os.Args []string variable, which contains the program name and all arguments passed to the application.
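  A minimal os.Args sketch; the `userArgs` helper is a hypothetical name introduced here to make the index-0 convention explicit.

  ```go
  package main

  import (
  	"fmt"
  	"os"
  )

  // userArgs strips the program name (always at index 0 of os.Args),
  // returning only the arguments the user actually passed.
  func userArgs(args []string) []string {
  	if len(args) <= 1 {
  		return nil
  	}
  	return args[1:]
  }

  func main() {
  	fmt.Println("program:", os.Args[0])
  	for i, a := range userArgs(os.Args) {
  		fmt.Printf("arg %d: %s\n", i+1, a)
  	}
  }
  ```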
- Go Programming Language Fundamentals — Statically typed compiled language featuring simple syntax, goroutines for concurrency, channels for communication, rich standard library (net/http), interface-based design, and practical applications in network programming and cloud-native development.
- Go Redis CRUD operations — Implementing create, read, update, and delete operations in Go using the go-redis/v8 library with JSON serialization for storing structured data as string values.
- Go struct definition for data modeling — Using Go struct types with exported fields (capitalized names) to model data entities that can be serialized to/from JSON.
- Go struct tags — Struct field annotations using backtick syntax that configure behavior such as JSON field mapping, enabling custom field name conversion during serialization.
- Go struct types — Composite data types in Go that group related variables into a single unit, used to define custom data structures like Customer with multiple named fields.
- Go variable declaration — Go supports multiple variable declaration syntaxes: var keyword with explicit typing, short declaration with :=, and inferred types, with guidance to minimize memory usage.
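  The three declaration forms above side by side (variable names are illustrative):

  ```go
  package main

  import "fmt"

  func main() {
  	// var with an explicit type.
  	var port int = 8080

  	// var with the type inferred from the initializer.
  	var host = "localhost"

  	// Short declaration with :=, valid only inside functions.
  	ready := true

  	fmt.Println(host, port, ready) // localhost 8080 true
  }
  ```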
- go-module-init-and-dependency-management — Go modules initialized with 'go mod init module-name' create go.mod files for dependency tracking, with external packages added via 'go get package/path' and imported in source files.
- go-redis-client-library-v8 — The go-redis/redis v8 package provides a Redis client for Go applications with support for connections, basic operations (Set, Get, Keys, Ping), and Sentinel failover configurations.
- go-smtp-mock — Go SMTP testing library from mocktools that provides mock SMTP server functionality for testing email workflows without sending real messages, available as github.com/mocktools/go-smtp-mock/v2.
- go-smtp-mock library — Go package for mock SMTP testing (github.com/mocktools/go-smtp-mock/v2) that simulates SMTP server behavior for development and testing email functionality without actual email transmission
- Goal-Driven Execution principle — Transform ambiguous requirements into verifiable outcomes through a reproduce → fix → verify → stop workflow with clear success criteria rather than ending at implementation completion.
- God Nodes分析 — Identifying the most important nodes in a knowledge graph (core concepts, key classes, design decisions) through graph-topology analysis, surfacing the key entry points for understanding an entire codebase
- Google AdSense integration — The method of adding Google AdSense page-level advertisements to websites by embedding the adsbygoogle.js script and configuring the google_ad_client ID for ad display functionality.
- Google Analytics integration — The process of adding Google Analytics tracking to websites using the gtag.js framework, which includes obtaining a tracking ID and embedding the global site tag script in the site's footer or partial layout files.
- Google Cloud Storage File Management Pattern — Service abstraction layer for blob storage operations including document creation, directory listing, path generation with MD5 prefixes, and batch deletion for temporary file cleanup.
- GraalVM — A high-performance JDK distribution that provides advanced optimizations for Java applications, including native image compilation for faster startup and lower memory footprint.
- GraalVM native images — GraalVM's technology for compiling Java applications ahead-of-time into standalone native executables, enabling instant startup, lower memory footprint, and easier deployment without requiring a JVM.
- Gradle and Maven repository sharing — The practice of configuring Gradle to use the same local and remote Maven repositories as Maven builds, allowing dependency sharing between the two build systems.
- Gradle build lifecycle — The three-phase execution model (initialization, configuration, execution) that Gradle follows when processing builds, determining when scripts are evaluated and tasks are executed.
- Gradle file operations — File manipulation capabilities including CopySpec for copying files, FileTree for iterating directories, and file path resolution methods (project.file, project.files).
- Gradle metaClass dynamic extension — Groovy metaClass capability to dynamically add third-party functionality to Gradle project objects at runtime, enabling custom behavior without inheritance.
- Gradle Project API — Core org.gradle.api.Project interface providing file operations, dependencies, task creation, and multi-project configuration methods like allprojects, subprojects, and buildscript.
- Gradle Task configuration and execution — The distinction between configuration phase (task definition) and execution phase (doFirst/doLast), with task inputs/outputs enabling incremental builds and UP-TO-DATE checking.
- Gradle Task dependencies and ordering — Mechanisms for specifying task execution relationships using dependsOn, mustRunAfter, shouldRunAfter, and finalizedBy to control build workflow.
- Grafana — An open-source analytics and visualization platform that integrates with Prometheus to create dashboards and visual representations of metrics data, commonly deployed alongside Prometheus in Kubernetes environments.
- Grafana and Prometheus monitoring stack — A mainstream, mature monitoring solution combination where Prometheus collects metrics and Grafana provides visualization dashboards, often integrated with service mesh tools like Istio for unified observability.
- Grafana dashboard integration — Visualization platform deployment with plugin installation (kubernetes-app, clock-panel, piechart-panel, D3Gauge, natel-discrete-panel), Prometheus data source configuration using TLS client certificates, and Kubernetes cluster integration for resource monitoring dashboards.
- Grafana dashboards and plugins — Visualization platform with plugin ecosystem (kubernetes-app, clock-panel, piechart-panel, etc.) and pre-built dashboard import system for K8S cluster, node, deployment, container, and etcd monitoring
- Grammar-preserving compression rules — Six core compression principles: remove connective words, limit sentences to 2-5 words, use simple verbs (do/make/fix), be specific with numbers, prefer active voice, and preserve meaningful information (numbers, sizes, names, constraints).
- Graphify — An AI coding-assistant tool that converts heterogeneous files such as codebases, documentation, and papers into a queryable knowledge graph, achieving 71.5x token compression through AST parsing, local transcription, and semantic extraction
- Groovy documentation ecosystem — Official documentation structure and resources for Groovy 3.0.4, including language reference, API docs, and guides for developers learning and using Groovy
- Groovy programming language — An object-oriented programming language for the Java platform, featuring dynamic typing, closure support, and concise syntax while maintaining compatibility with Java
- Groovy syntax and features — Concise and expressive syntax enhancements over Java, including optional semicolons, string interpolation, native syntax for lists and maps, built-in regular expressions, and closures
- Groovy version 3.0.x — Specific release line of Groovy language (3.0.4 referenced) representing the 3.x major version with particular features, improvements, and compatibility considerations
- Groovy-Java interoperability — Groovy's ability to seamlessly integrate with Java code, libraries, and frameworks, allowing developers to leverage existing Java ecosystems while using Groovy's enhanced features
- groupingBy collector — Stream collector that groups elements by a classification function into a Map, with variants supporting downstream collectors and custom map suppliers.
- gRPC and HTTP dual-protocol authorization — Ext Authz service architecture supporting authorization checks over both HTTP (port 8000) and gRPC v2/v3 (port 9000) APIs for flexible integration with different client implementations.
- gRPC xDS (xDS API) — The gRPC implementation of the xDS ("x Discovery Service") protocol family, allowing gRPC clients to receive dynamic configuration (listeners, routes, clusters) directly from service mesh control planes like Istio.
- grpc-agent injection template — Special Istio sidecar injection template that installs only the Istio pilot-agent component without Envoy proxy, enabling proxyless gRPC architecture while maintaining control plane connectivity.
- grunt-devtools — A Grunt plugin that integrates Grunt task runner functionality directly into Chrome DevTools, enabling build task management from within the browser's developer interface.
- Guaranteed QoS Pods — Highest-priority Kubernetes pod classification where every container has equal Request and Limit values for both CPU and memory, ensuring these pods are never killed or throttled unless exceeding their own limits.
- Guava BloomFilter — Google Guava library's implementation of Bloom filter providing string funnel support and merge operations (putAll) for combining multiple filters.
- Gulp file watching automation — Using gulp.watch to monitor specific file patterns and trigger dependent tasks when changes are detected, enabling hot-reload development workflows
- Gulp LiveReload Workflow — A development workflow automation pattern using Gulp with gulp-livereload and gulp-connect to automatically refresh the browser when source files change
- Gulp task configuration pattern — Pattern for organizing Gulp build tasks with separate functions for server startup, file watching, and browser reloading, composed into a default task
- gulp-connect — A Gulp plugin that provides a local development HTTP server with LiveReload integration for frontend development
- Hadoop ecosystem components — Core components of the Hadoop big data framework, including distributed storage (HDFS), computation (MapReduce), NoSQL database (HBase), analysis engines (Hive, Pig), data collection tools (Sqoop, Flume), web management (HUE), and workflow orchestration (Oozie).
- Hadoop on Windows configuration — Configuration process and environment setup for running Apache Hadoop on Windows operating systems, including HADOOP_HOME and hadoop.home.dir environment variables.
- HADOOP_HOME environment variable — Essential environment variable that points to the root directory of the Hadoop installation, required for Hadoop processes to locate necessary binaries and configuration files.
- hadoop.home.dir property — Hadoop configuration property that specifies the home directory location, often used to override or supplement HADOOP_HOME environment variable settings within Hadoop's configuration files.
- HandlerAdapter — Strategy interface that enables DispatcherServlet to invoke handlers regardless of their actual implementation type, supporting various handler types including HttpRequestHandler, Controller, and RequestMappingHandlerAdapter.
- HandlerExceptionResolver — Spring MVC components that handle exceptions thrown during request processing, with implementations including ExceptionHandlerExceptionResolver, ResponseStatusExceptionResolver, and DefaultHandlerExceptionResolver for different exception handling strategies.
- HandlerMapping — Spring MVC component responsible for mapping incoming web requests to appropriate handlers, with implementations including BeanNameUrlHandlerMapping for URL-to-bean mapping and RequestMappingHandlerMapping for annotation-based routing.
- HandlesTypes annotation — Servlet 3.0 annotation used with ServletContainerInitializer to specify which class types should be passed to the onStartup() method for processing.
- HashiCorp Learn tutorials — Interactive learning platform providing hands-on tutorials for HashiCorp tools including Terraform
- HCL (HashiCorp Configuration Language) — The domain-specific language used to write Terraform configuration files with .tf extension for defining infrastructure resources
- Header-based and Cookie-based Traffic Routing — Granular traffic routing mechanisms that direct requests to canary deployments based on HTTP header values or cookie matches, with support for regex patterns in headers and special handling for cookies.
- Header-based authorization check — Authorization validation method that checks for specific HTTP headers in incoming requests, such as `x-ext-authz: allow`, to permit or deny traffic.
- Headless Chrome — Chrome browser execution mode without visible UI using --headless flag, useful for server-side automation and CI/CD pipelines
- Headless Pattern (Claude Code) — Fully autonomous Claude Code execution using the `claude -p` flag for non-interactive operation, integrable with cron jobs and automation scripts for tasks with easily verifiable outputs.
- Health Check — Mechanisms for verifying the operational status and viability of system components, typically used to determine if services are functioning correctly.
- Heapster Monitoring Integration — Legacy Kubernetes monitoring add-on that collects metrics and provides graphical visualization capabilities within the Dashboard interface, though noted as potentially inaccurate and optional for deployment.
- helloworld sample service — A simple Istio sample service with two versions (v1, v2) that returns its version and hostname when called, used for demonstrating version routing and canary deployments.
- helloworld sample service (Istio) — A dual-version Istio sample service that demonstrates version routing and canary deployments by returning its version and hostname.
- Helm (Kubernetes package manager) — A template engine for Kubernetes that enables creation of reusable, parameterized configuration templates to manage deployments across multiple environments (e.g., production vs. development) without duplicating YAML files.
- Helm chart configuration — A declarative configuration mechanism using values.yaml to customize Kubernetes deployments through parameters like image settings, resources, replicas, and service configurations.
- Helm Chart Configuration Parameters — Comprehensive set of configurable parameters including image settings (repository, tag, pullPolicy), replica count, resource limits, service type, ingress configuration, RBAC settings, and security contexts for both pod and container levels.
- Helm chart deployment — Kubernetes package manager using charts (collections of YAML templates) for complex application deployment, with value customization through values.yaml files and support for lifecycle management (install, upgrade, rollback) of releases.
- Helm chart deployment for monitoring stack — The practice of using Helm package manager to deploy and manage monitoring infrastructure (Prometheus, Grafana) on Kubernetes, providing reproducible installations and simplified configuration management.
- Helm Chart Installation and Uninstall — Standard Helm workflow for deploying and removing Kubernetes Dashboard using helm install and helm delete commands, with support for custom configurations via --set flags or YAML files.
- Helm chart pull command — Using `helm pull` to download chart archives from repositories to local filesystem for inspection or offline deployment
- Helm Chart Repository Management — Process of adding and configuring Helm chart repositories to access packaged applications, using commands like helm repo add to specify repository names and URLs.
- Helm chart value overriding — The practice of customizing Helm chart deployments by overriding default values, either through command-line flags or custom values files
- Helm configuration parameters for Metrics Server — Configurable deployment settings for Metrics Server including RBAC, service accounts, networking, resource limits, affinity, pod disruption budgets, and container image specifications.
- Helm deployment method — Package management approach for Kubernetes applications using helm upgrade --install commands with repository URLs, supporting namespace creation and version management.
- Helm installation via Chocolatey — A method for installing the Helm package manager on Windows using the Chocolatey package manager (choco install).
- Helm Package Manager for Kubernetes — Kubernetes package manager using charts (collections of YAML templates) for complex application deployment, with value customization through values.yaml files and support for lifecycle management (install, upgrade, rollback) of releases.
- Helm Release Lifecycle Management — Complete lifecycle operations for Helm charts including installation via helm install, deletion through helm delete, and status verification with deployment metadata and revision tracking.
- Helm Release Resource Configuration — Terraform resource specification for Helm releases containing deployment parameters including repository URL, chart name, namespace, timeouts, and custom value overrides through set blocks.
- Helm Value Customization — Mechanism for overriding default Helm chart values using set blocks to configure application-specific parameters such as service ports, replica counts, protocols, and RBAC settings.
- HELO vs EHLO Commands — SMTP greeting commands where HELO initiates standard SMTP communication while EHLO initiates Extended SMTP (ESMTP) with server capability advertisement; both use domain names validated against client IP addresses for anti-spam purposes.
- Hermes Agent — An open-source, self-hosted AI agent platform by Nous Research featuring closed learning loops that automatically generate and reuse skill documentation across sessions.
- Hermes Agent data directory structure (~/.hermes/) — Standardized storage location where Hermes Agent persists memory, sessions, skills, configuration, and runtime state that monitoring tools read directly.
- Hermes Agent integration with custom endpoints — Hermes Agent integrates custom local models (such as Qwen deployed via vLLM) through OpenAI-compatible API endpoints, supporting both interactive configuration and config-file settings for base_url, default_model, and max_context_tokens, with the configuration passed down to sub-agents.
- Hermes Agent v0.11 Architecture — Major structural upgrade introducing React+Ink TUI, pluggable Transport Layer, Plugin/Hook system, and Orchestrator-based multi-agent coordination—shifting Hermes from a monolithic tool to an extensible platform.
- Hermes data directory monitoring — Architecture pattern where monitoring tools read AI agent's persistent state from ~/.hermes/ directory, using file system watchers (watchfiles) and mtime-based caching to detect and broadcast changes via WebSocket
- Hermes HUD UI — Browser-based monitoring dashboard for Hermes Agent that provides real-time visualization of token usage, memory, skills, conversations, and internal state through 13 tabs.
- Hermes Plugin and Hook System — Extension points enabling interception and transformation of tool calls and results through pre-tool-call (blocking), transform_tool_result, transform_terminal_output, register_command, dispatch_tool, and shell hooks—supporting governance, audit, and policy control.
- Hermes TUI (Terminal User Interface) — React + Ink-based terminal interface with sticky composer, real-time streaming output, git branch status, per-round timing, and sub-agent spawn visualization—enabling mid-task decision making rather than post-task retrospection.
- Hermes TUI vs Web UI comparison — Two monitoring interfaces for Hermes AI: hermes-hud (terminal-based) and hermes-hudui (browser-based), both reading from ~/.hermes/ directory with Web UI offering additional features like command palette, real-time chat, and theme switching
- Hermes v0.8.0 Intelligence Release — Major April 2026 update introducing self-healing capabilities, background task notifications, dynamic model switching, and OAuth 2.1 support across 209 PRs and 82 issue fixes.
- Hexo command system — Core command-line interface for the Hexo static site generator, including commands for creating posts, managing drafts, publishing content, and generating pages.
- Hexo configuration (_config.yml) — The central configuration file in Hexo that controls site-wide settings including themes, deployment, plugins, and other core parameters that require attention when updating to new versions.
- Hexo deployment plugins — Essential Hexo plugins for blog functionality including tag clouds (hexo-tag-cloud), image handling (hexo-asset-image), live preview (hexo-browsersync), RSS feeds, and search capabilities (hexo-generator-searchdb).
- Hexo directory structure — The standard organization of a Hexo site including _config.yml for configuration, source/_posts for markdown articles, themes for site theming, and public for generated static files.
- Hexo image rendering with hexo-renderer-marked — A Hexo plugin (hexo-renderer-marked) required to properly render markdown images and prevent image disappearance issues in blog posts.
- Hexo layout system — Template structure within Hexo themes where layout.ejs defines the base structure and individual EJS files provide content for the body section.
- Hexo plugin ecosystem — Collection of npm packages that extend Hexo functionality including RSS feeds, sitemaps, search, tag clouds, image handling, and live browser reload during development.
- Hexo static site generator — A fast, Node.js-based static site generator designed for blogging, featuring Markdown support, theming, and plugin ecosystem for publishing to GitHub Pages.
- Hexo theme customization — A technique for modifying Hexo blog themes by editing layout partial files such as after-footer.ejs in the theme directory to inject custom scripts and tracking codes into site footers.
- Hibernate Oracle dialect configuration — JPA and Hibernate property settings for Oracle database compatibility, including dialect specification, SQL formatting options, and DDL auto-generation controls.
- Hierarchical context lifecycle independence — Parent and child ApplicationContext instances maintain independent lifecycle states through isActive(), allowing children to be closed and recreated without affecting parent contexts or siblings in the hierarchy.
- Hierarchical note organization — The practice of structuring notes into layers rather than flat structures, using linking relationships to create depth and context in personal knowledge systems.
- home-router-network-connection — The fundamental process of establishing network connectivity on home wireless routers, typically involving physical setup and wireless network authentication.
- Honcho Dialectic Modeling — User modeling technique that builds deepening understanding of individual users across sessions through dialectical analysis of interaction patterns
- honesty boundary pattern — Design principle where AI skills explicitly document their limitations—what cognition cannot be distilled (intuition, mutations), public-private divergence, and prediction failures—to set appropriate user expectations and maintain trust.
- Horizontal Pod Autoscaler (HPA) — Pod-level autoscaler that automatically adjusts deployment replica counts based on metrics from the Metrics Server, using the formula desiredReplicas = ceil[currentReplicas × (currentMetricValue / desiredMetricValue)] to calculate scaling needs.
- Horizontal Pod Autoscaler integration with Metrics Server — The integration pattern where Kubernetes Horizontal Pod Autoscaler consumes resource metrics from Metrics Server to automatically scale workload pods based on CPU/memory utilization.
- Host file alias configuration — Technique for mapping custom domain names to local loopback addresses in the hosts file to enable local development with realistic domain names.
- hostpath StorageClass — Docker Desktop's default StorageClass that stores volumes on node filesystem, suitable for single-node development environments to enable data sharing between Pods.
- HostPath Volume — Mounts a file or directory from the host Node's filesystem into a Pod. Provides powerful capabilities but carries security risks. Official recommendation: avoid when possible, limit scope to required files/directories, and mount as read-only.
- Hosts file configuration — Configuration practice for hostname-to-IP address mapping using the hosts file (/etc/hosts on Linux, C:\Windows\System32\drivers\etc\hosts on Windows), enabling hostname resolution across multiple networked machines.
- Hosts file domain mapping — Editing the /etc/hosts file to map local IP addresses to domain names for local name resolution, commonly used in development environments to test domain mappings for self-signed certificates.
- Hot class reloading — The capability to reload modified class files while a JVM is running without restart, exemplified by tools like Spring Loaded that use Java Agent technology.
- HPA and VPA Auto mode incompatibility — Using Horizontal Pod Autoscaler with non-external metrics (CPU/memory) simultaneously with VPA in Auto mode causes conflicts because both modify pod resource requests—HPA needs stable requests for scaling calculations while VPA continuously changes them, requiring use of VPA Off mode with manual recommendation application instead.
- HPA and VPA incompatibility — HPA and VPA cannot be used together directly because VPA modifies resource requests while HPA relies on those same values for replica scaling decisions, creating conflicts unless using custom metrics or Multidim Pod Autoscaler.
- HPA metric target types — Three ways to specify target values for HPA metrics: Utilization (percentage of resource requests for CPU/memory), AverageValue (average quantity across all Pods), and Value (absolute target value for Object/External metrics).
- HPA metric types — Four categories of metrics that can trigger Horizontal Pod Autoscaler scaling decisions: Resource (CPU/memory), Pods (per-pod metrics), Object (Kubernetes object metrics like Ingress), and External (non-Kubernetes metrics from monitoring systems).
- HPA metrics evolution and API versions — HPA API has evolved rapidly from v1 (CPU-only) through v2, v2beta2 to current versions, with v2beta2 and later adding memory metrics support alongside CPU utilization, requiring developers to consult latest documentation.
- HPA prerequisites and Metrics Server — Horizontal Pod Autoscaler requires Metrics Server to be installed and operational in the Kubernetes cluster to collect resource metrics (CPU, memory) that serve as the basis for autoscaling decisions.
- HPA resource metric configuration — Configuration for autoscaling based on Kubernetes resource metrics (CPU, memory) using the Resource metric type with averageUtilization target as a percentage of requested resources.
- HPA scaling behavior policies — Configuration parameters that control the rate and stability of scaling operations, including stabilizationWindowSeconds to prevent replica flapping, and policies that define maximum percentage or pod count changes per periodSeconds.
- HPA stabilization window — A behavior configuration setting (stabilizationWindowSeconds) that prevents replica count flapping by using the maximum desired state value within a specified time interval when scaling down, ensuring more stable autoscaling behavior during fluctuating loads.
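The HPA entries above fit together in a single manifest; the following is an illustrative sketch (resource names and numbers are placeholders, not from the source):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa            # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60   # percent of requested CPU
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300  # use max desired state over 5 min to prevent flapping
```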
- HPA validation with load testing — Testing procedure using busybox wget load generator to simulate traffic and observe HPA scaling behavior in real-time via kubectl get hpa --watch, demonstrating automatic horizontal pod scaling based on CPU/memory metrics.
- HPA-VPA incompatibility with non-external metrics — Using HPA with non-external metrics (CPU/memory) alongside VPA in Auto mode creates conflicts because both modify Pod resource requests, leading to unpredictable behavior
- HQ&A note-taking method — A three-step note-taking method for deep learning: Highlight key passages during reading, formulate Questions where those highlights are the answers, then Answer in your own words to ensure understanding.
- HTTP handler functions in Go — Functions with signature func(ResponseWriter, *Request) that process incoming HTTP requests and write responses, registered via http.HandleFunc with URL patterns.
- HTTP headers — Key-value metadata pairs in HTTP requests and responses, accessible via r.Header for reading incoming headers and w.Header().Add() for setting response headers.
- HTTP MessageConverter — Spring MVC's component for converting HTTP request/response bodies to/from Java objects, supporting content negotiation and customizable serialization formats.
- HTTP method semantics in REST — The five primary HTTP verbs (GET, POST, PUT, PATCH, DELETE) and their correspondence to CRUD operations, including idempotency characteristics and expected return values
- HTTP methods (GET and POST) — HTTP method types for different operations: GET requests data from a server, while POST sends data to create or update resources, with proper method validation and status code responses.
- HTTP Public Key Pinning (HPKP) — A security mechanism that allows servers to specify which certificate authorities' public keys are trusted, providing defense against fraudulent certificates and man-in-the-middle attacks.
- HTTP response body preservation on error — The challenge of retrieving response body content from failed HTTP requests, as standard HTTP client behavior typically discards the body when status codes indicate errors, requiring custom handling to preserve diagnostic or error information.
- HTTP security headers — A collection of HTTP response headers that enhance web application security by defending against common vulnerabilities such as XSS, clickjacking, and MIME type sniffing.
- HTTP status codes — Standardized response codes indicating the outcome of HTTP requests, including 200 (success), 404 (not found), 400 (bad request), and 500 (internal server error)
- HTTP status codes for RESTful APIs — Standard HTTP response codes and their appropriate usage in REST APIs, including success codes (200, 201, 204), client errors (400, 401, 403, 404, 422), and server errors (500)
- HTTP Strict Transport Security — A web security mechanism that enforces secure HTTPS connections by instructing browsers to only interact with the server over encrypted connections, preventing protocol downgrade attacks.
- HTTP Strict Transport Security (HSTS) — Web security mechanism that enforces secure HTTPS connections by instructing browsers to only interact with the server over encrypted connections, preventing protocol downgrade attacks.
- HTTP tunnel — A networking technique that forwards HTTP traffic from a public URL to a local server port, enabling external access to applications running on localhost.
- http-handler-query-parameter-parsing-in-go — Extracting query parameters from HTTP requests in Go using r.URL.Query()["paramname"], which returns a string slice allowing optional parameter handling for conditional endpoints.
- Httpbin Service — An HTTP testing and debugging service that provides various endpoints for testing HTTP requests and responses, commonly used for experimentation with service mesh features.
- HTTPS Decryption — The process of configuring packet capture tools to inspect encrypted SSL/TLS traffic, typically requiring installation of trusted root CA certificates on the monitoring device.
- Hybrid Retrieval with RRF Fusion — Search architecture combining vector similarity (HNSW cosine), keyword search (tsvector + ts_rank), and Reciprocal Rank Fusion (RRF) with the formula score = sum(1/(60 + rank)), followed by multi-layer deduplication.
- Hyperlink-based Note Stratification — The practice of organizing notes into hierarchical layers using hyperlinks as the primary connection mechanism, rather than traditional folder structures
- I/O operation combinations — Four possible I/O models combining blocking/non-blocking with synchronous/asynchronous: synchronous blocking (traditional), synchronous non-blocking (polling), asynchronous blocking (impractical), and asynchronous non-blocking (optimal).
- IaC programming language approach — The practice of using general-purpose programming languages (such as JavaScript, Java, Python, Go) instead of YAML or other DSLs to define and manage infrastructure as code
- Idempotency in HTTP Requests — The property of certain HTTP methods where making the same request multiple times produces the same result as a single request, with GET, PUT, DELETE, and HEAD being idempotent while POST is not.
- Inbound vs Outbound Handlers — Two directional handler types in ChannelPipeline where inbound handlers process incoming events (read) in forward order (1→N) while outbound handlers process outgoing events (write) in reverse order (M→1).
- Incremental hardware scaling strategy — An approach recommending starting with available hardware resources and upgrading only when constraints are encountered, rather than over-provisioning initially.
- index-failure-conditions — Conditions that prevent MySQL from using indexes even when they exist, including column operations in WHERE clauses (like 'where a+1 = 5'), implicit type conversions, improper data type matching, and violations of the leftmost prefix principle.
- Infrastructure as Code (IaC) — Practice of managing and provisioning infrastructure through machine-readable definition files rather than manual configuration
- Infrastructure as code with Kubernetes manifests — Declarative approach to deploying Spinnaker components using Kubernetes YAML manifests (Deployment, Service, Ingress, ConfigMap) with NFS-backed persistent storage and health probes.
- Infrastructure automation orchestration — The practice of using tools to automatically provision, configure, and manage IT infrastructure resources, reducing manual intervention and ensuring consistency.
- Infrastructure service restart workflow — Standard DevOps pattern for applying configuration changes to services: disable service, modify configuration files, run daemon-reload, re-enable service, restart service, and verify status with systemctl
- Ingress — Kubernetes abstraction for HTTP/HTTPS layer-7 routing that acts as a global reverse proxy fronting multiple Services, enabling URL-based routing and TLS termination separate from layer-4 Services
- Ingress Controller — The actual implementation component (such as ingress-nginx) that reads Ingress resource configurations and processes the routing rules, typically deployed as a Deployment with LoadBalancer service exposing ports 80 and 443.
- Ingress controller configuration — Kubernetes ingress controllers manage external HTTP/HTTPS traffic routing to services, with configuration through YAML manifests defining routing rules, host-based virtual hosting, and TLS certificate handling for secure access.
- Ingress Controller Service Types — Ingress controller services can be exposed as LoadBalancer or NodePort types, with LoadBalancer attempting to provision external IP addresses and NodePort exposing ports on cluster nodes.
- Ingress default backend — Fallback routing configuration in an Ingress resource that specifies where to send traffic when no other rules match the request, typically used for single-service exposure or error handling.
- Ingress fanout pattern — Routing configuration that distributes traffic from a single IP address to multiple backend services based on URL paths or hostnames, enabling multiple services to share the same ingress endpoint.
- Ingress fundamentals in Kubernetes — Ingress is an API object that manages external access to cluster services, typically HTTP, providing load balancing, SSL termination, and name-based virtual hosting capabilities.
- Ingress NodePort configuration — Network exposure pattern where ingress-nginx-controller service uses NodePort type (ports 80:30035/TCP, 443:30603/TCP) to make the ingress controller accessible via worker node IPs.
- ingress resource creation with hostname routing — Creating Ingress resources using kubectl with --class and --rule flags to map hostnames to backend services, demonstrated with demo.localdev.me mapping to demo:80
- Ingress resource creation with kubectl — Creating ingress resources using kubectl create ingress command with class specification and routing rules, mapping hostnames to services using syntax like --rule=host/*=service:port.
- Ingress rule configuration — YAML-based configuration structure that defines routing rules including host, path patterns, pathType (Prefix/Exact), and backend service mappings to direct HTTP traffic to appropriate services.
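The rule structure described above, using the demo.localdev.me-to-demo:80 mapping that appears elsewhere in this list, looks roughly like the following manifest (the Ingress name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress       # illustrative name
spec:
  ingressClassName: nginx
  rules:
    - host: demo.localdev.me
      http:
        paths:
          - path: /
            pathType: Prefix   # Prefix or Exact
            backend:
              service:
                name: demo
                port:
                  number: 80
```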
- Ingress testing workflow — Validation process for ingress controller functionality using dedicated test resources (test-nginx-ingress.yaml) to verify routing, SSL termination, and service reachability
- ingress-nginx admission webhooks — ValidatingWebhookConfiguration resources for ingress-nginx that create and patch admission webhook jobs during deployment
- ingress-nginx controller installation via kubectl — Deployment method for ingress-nginx controller using static provider manifests with kubectl apply
- ingress-nginx controller LoadBalancer to NodePort conversion — Modifying the ingress-nginx service type from LoadBalancer to NodePort for environments without cloud load balancer support, after which the service reports no EXTERNAL-IP.
- IngressClass resource — IngressClass is a Kubernetes resource (ingressclass.networking.k8s.io/nginx) that specifies which ingress controller should handle Ingress resources, enabling multiple ingress controllers in the same cluster.
- Init containers — Specialized containers that run sequentially before the main container starts, used for initialization tasks, dependency waiting, and preparing shared resources via volumes
- InnoDB vs MyISAM storage engines — The two primary MySQL storage engines with key differences: InnoDB supports transactions, foreign keys, row-level locking, and uses clustered B+ tree indexes; MyISAM lacks transaction support, uses table-level locking, and stores indexes and data separately.
- Instructional WHAT-WHY framework — A documentation pattern that explicitly explains what is being done (WHAT) and the rationale behind it (WHY) to provide complete context for technical procedures.
- Instrumentation API — Java's instrumentation interface that provides hooks for inspecting and modifying class bytecode at runtime, enabling dynamic class redefinition and profiling.
- IntelliJ IDEA Code Style Configuration — Settings for controlling code formatting and import optimization in IntelliJ IDEA, including eliminating wildcard imports (import *), configuring code alignment, and managing code style preferences.
- IntelliJ IDEA Custom Configuration Paths — Method for customizing default directories for IDE config and system folders using idea.properties file to override default user.home locations.
- IntelliJ IDEA Font Size Control — Mouse-based font size adjustment in IntelliJ IDEA using Ctrl + mouse scroll wheel to dynamically increase or decrease editor text size for improved readability.
- IntelliJ IDEA Keyboard Shortcuts — Essential keyboard shortcuts for IntelliJ IDEA including project switching (Ctrl+Alt+[ or ]), auto-complete return value (Ctrl+Alt+V), automatic semicolon insertion (Ctrl+Shift+Enter), and inheritance tree viewing (Ctrl+Alt+U).
- IntelliJ IDEA Note Formatter — Custom formatter settings for modifying note or comment formatting behavior in IntelliJ IDEA, requiring configuration in two specific locations within the IDE settings.
- IntelliJ IDEA Plugin Ecosystem — A curated collection of essential productivity plugins for IntelliJ IDEA including development, debugging, code quality, and UI customization tools.
- IntelliJ IDEA Save Actions — Configuration for automatic code formatting and optimization on save, including visual indicators (asterisk on modified tabs) and auto-save timing settings that trigger cleanup actions during file saves.
- IntelliJ IDEA Settings Synchronization — The process of backing up and synchronizing IntelliJ IDEA configuration files to GitHub using repository integration and personal access tokens, enabling consistent development environments across machines.
- IntelliJ WebLogic deployment — Process and configuration for deploying Java web applications to WebLogic servers using IntelliJ IDEA as the development and deployment tool
- Intermediate certificate authority pattern — PKI design pattern where a subordinate CA (Citadel) holds signing authority to issue workload certificates while being itself certified by a root CA, separating operational certificate issuance from root trust management.
- Interruptibility Design Principle — System design ensuring API calls and tool executions can be cancelled at any time via user input (Ctrl+C, /stop command) or signals, enabling real-time responsiveness
- Inverted index — Core data structure used by Elasticsearch and underlying Lucene for efficient full-text search, mapping tokens to document locations to enable fast relevance-based retrieval.
- IP address detection methods — Techniques for determining the current public IP address of a system, using external services like API endpoints (ipify, ipinfo) or parsing router status pages to extract network information.
- iptables SNAT optimization for containers — Network optimization technique that prevents source address translation for internal cluster traffic by modifying iptables POSTROUTING rules to preserve container IP addresses in logs
- Istio — An open-source service mesh platform that provides traffic management, security, and observability features for microservices running on Kubernetes.
- Istio Automatic Sidecar Injection — A Kubernetes cluster configuration that automatically injects the Istio envoy proxy sidecar container into pods, removing the need for manual pod specification modifications.
- Istio Citadel CA Plugin Architecture — Configuration model where Istio Citadel operates as an intermediate certificate authority under a custom root CA rather than using self-signed certificates, enabling integration with existing PKI infrastructure.
- Istio Dashboard Collection — A suite of pre-configured Grafana dashboards for monitoring Istio service mesh health, including mesh overview, service-level metrics, workload breakdowns, performance monitoring, and control plane health visualization.
- Istio default external access limitations — Default configuration for external service access that excludes HTTP on port 80 and SSH on port 22, requiring additional configuration to enable these protocols
- Istio documentation — Official documentation resources for Istio service mesh, including installation guides, configuration references, and sample applications.
- Istio documentation guide — Official Istio project documentation providing instructions, guides, and best practices for deploying and managing service mesh infrastructure.
- Istio external service access configuration — The default Istio behavior of blocking external service access and the mechanisms for configuring outbound traffic through sidecar proxies using iptables redirection
- Istio external service access errors — Common failure modes when external service access is not properly configured, including 404 errors, HTTPS connection problems, and TCP connection failures
- Istio Gateway — A Kubernetes custom resource that defines ingress traffic entry points into the service mesh, working in conjunction with VirtualServices to enable protocol-specific routing and upgrades like WebSocket connections.
- Istio Ingress Gateway — A gateway mechanism that enables external traffic to enter the Istio service mesh, providing controlled access to internal services through defined routing rules.
- Istio manifest configuration — YAML configuration files that define Istio resources and policies for deploying service mesh capabilities in Kubernetes environments, including gateway, virtual service, and destination rule definitions.
- Istio mesh egress traffic — External service access from within the Istio service mesh, allowing applications to reach services outside the mesh through configured egress rules.
- Istio Proxyless gRPC — A service mesh integration mode where gRPC applications communicate directly with Istio control plane via xDS API without Envoy proxy sidecar interception, reducing network hops and latency.
- Istio Samples Directory — A collection of sample applications demonstrating Istio service mesh features, capabilities, and integration patterns for learning and reference purposes.
- Istio service deployment workflow — The standard process for deploying services into an Istio mesh: install Istio, inject sidecar proxy configuration into resource manifests, apply with kubectl, and clean up using kubectl delete.
- Istio service mesh — An open-source service mesh platform that provides a uniform way to secure, connect, and monitor microservices, extending Kubernetes with traffic management, security policies, and observability features.
- Istio Service Mesh Implementation — Complete service mesh solution providing traffic management through VirtualServices and Gateways, security with mTLS, observability via telemetry APIs, and automatic sidecar injection for microservices communication control.
- Istio ServiceEntry Configuration — Istio resource type used to define and manage external service access rules, where misconfiguration can result in server name resolution problems and connection failures.
- Istio Sidecar Bootstrap Override — The mechanism for customizing Envoy proxy initialization in Istio by using the sidecar.istio.io/bootstrapOverride annotation to specify a ConfigMap containing custom bootstrap configuration.
- Istio Sidecar Injection — The mechanism of automatically or manually injecting Envoy proxy sidecars into application pods to integrate them into the Istio service mesh for traffic management and observability.
- Istio sidecar injection (istioctl kube-inject) — The process of modifying Kubernetes deployment specifications to include Istio's Envoy proxy sidecar containers, enabling services to participate in the service mesh.
- Istio Sidecar Injection Methods — The two approaches for injecting Istio Envoy sidecar proxies into Kubernetes pods: automatic sidecar injection configured at the namespace level, or manual injection using the istioctl kube-inject command.
- Istio sidecar proxy egress traffic interception — How Istio uses iptables to transparently redirect all outbound pod traffic to the sidecar proxy, which by default only handles intra-cluster destinations.
- Istio Telemetry Addons — A collection of optional but essential observability integrations for Istio service mesh including monitoring, visualization, and tracing tools that can be quickly deployed via Kubernetes manifests.
- Istio Telemetry API — Kubernetes custom resource (telemetry.istio.io/v1alpha1) for configuring how Istio collects and exports observability data including access logs, metrics, and traces
- Istio VirtualService — Istio's core traffic management resource for configuring routing rules within a service mesh, including advanced capabilities like WebSocket upgrade support for incoming ingress traffic.
- istio-staged-deployment — A progressive deployment pattern using Skaffold modules to install Istio components incrementally: base, istiod, ingress, Kiali, and sample applications like bookinfo.
- istioctl — The command-line interface tool for installing, configuring, and managing Istio service mesh, available through package managers like Chocolatey on Windows.
- istioctl proxy-config bootstrap — A diagnostic command for inspecting the actual bootstrap configuration being used by a specific pod's Envoy proxy instance, useful for debugging and verification.
- Iterative learning through replication — An educational approach acknowledging that copy-paste execution is a valid starting point for learning complex technical systems, with understanding developing through hands-on practice.
- Jaeger vs Zipkin for Istio — Comparison of two distributed tracing systems compatible with Istio: Jaeger (default deployment) provides comprehensive distributed tracing and context propagation, while Zipkin offers an alternative for gathering latency timing data but requires manual deployment configuration.
- Java 8 Stream API — A sequence of elements supporting sequential and parallel aggregate operations, introduced in Java 8 as part of the java.util.stream package for functional-style data processing.
- Java 8 Stream Collector — The mutable reduction operation in Java 8 streams that accumulates elements into a result container through supplier, accumulator, combiner, and optional finisher operations, contrasting with immutable reduce operations.
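The four parts of a mutable reduction can be seen in a hand-built collector. A minimal sketch (class and method names are illustrative, not from the source) using Collector.of to join strings:

```java
import java.util.StringJoiner;
import java.util.stream.Collector;
import java.util.stream.Stream;

public class CustomCollectorDemo {
    // supplier creates the mutable container, accumulator folds each element
    // in, combiner merges partial containers (used by parallel streams), and
    // the finisher converts the container to the final result.
    static final Collector<String, StringJoiner, String> JOINING =
        Collector.of(
            () -> new StringJoiner(", "),   // supplier
            StringJoiner::add,              // accumulator
            StringJoiner::merge,            // combiner
            StringJoiner::toString          // finisher
        );

    public static String joinNames(Stream<String> names) {
        return names.collect(JOINING);
    }

    public static void main(String[] args) {
        System.out.println(joinNames(Stream.of("a", "b", "c"))); // a, b, c
    }
}
```

This mirrors what the built-in Collectors.joining(", ") does internally.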
- Java Agent — A Java technology that allows bytecode manipulation at runtime, executing code before the main method through special JAR packaging with MANIFEST.MF configuration.
- Java Agent AOP implementation — Using Java Agent technology to implement Aspect-Oriented Programming patterns through bytecode manipulation, enabling cross-cutting concerns without explicit code changes.
- Java Collectors — Utility class providing common reduction and collection operations for Stream terminal operations including grouping, counting, summing, joining, and collection conversion.
- Java Cryptography Architecture (JCA) — Java's framework for providing cryptographic services and security APIs, forming the foundation for Java security implementations.
- Java Cryptography Extension (JCE) — Extension framework to JCA that provides additional cryptographic capabilities and enhanced security services for Java applications.
- Java Functional Interface — A Java interface with a single abstract method, enabling lambda expressions and method references as instances, serving as the foundation for functional programming constructs in Java 8+.
- Java functional interfaces overview (Consumer, Function, Predicate, Supplier) — Four core built-in functional interfaces in Java 8: Consumer (T→void), Function (T→R), Predicate (T→boolean), and Supplier (void→T), representing common functional programming patterns.
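The four interface shapes above can be shown side by side. A small sketch (names are illustrative):

```java
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;

public class FunctionalInterfacesDemo {
    public static String demo() {
        Supplier<String> supplier = () -> "hello";          // () -> T
        Function<String, Integer> length = String::length;  // T -> R
        Predicate<Integer> isShort = n -> n < 10;           // T -> boolean
        StringBuilder log = new StringBuilder();
        Consumer<Object> sink = log::append;                // T -> void
        String value = supplier.get();
        sink.accept(value + ":" + length.apply(value) + ":" + isShort.test(length.apply(value)));
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // hello:5:true
    }
}
```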
- Java I/O Stream hierarchy — Two-tier stream architecture using the decorator design pattern: node streams (FileInputStream) for actual I/O and processing streams (BufferedInputStream) that wrap and enhance functionality.
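The decorator relationship is visible in how the processing stream wraps the node stream at construction time. A minimal sketch (requires Java 11+ for Files.writeString; class name is illustrative):

```java
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class DecoratorStreamDemo {
    public static String readAll(Path file) throws IOException {
        // Node stream (FileInputStream) does the actual I/O; the processing
        // stream (BufferedInputStream) decorates it with buffering.
        try (InputStream in = new BufferedInputStream(new FileInputStream(file.toFile()))) {
            return new String(in.readAllBytes());
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.writeString(tmp, "decorated");
        System.out.println(readAll(tmp)); // decorated
        Files.delete(tmp);
    }
}
```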
- Java Instrumentation API — Java's instrumentation interface that provides hooks for inspecting and modifying class bytecode at runtime, enabling dynamic class redefinition and profiling.
- Java Jigsaw — Java's module system introduced in Java 9 that provides strong encapsulation, reliable configuration, and scalable platform architecture through modular application design
- Java Memory Model (JMM) — The Java Memory Model defines how threads interact through memory and what behaviors are guaranteed by the JVM in concurrent programming contexts.
- Java method references — Syntactic shorthand for lambda expressions referencing existing methods or constructors using the double colon (::) operator, including static methods, instance methods, and constructor references.
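The three reference kinds named above can be sketched in one place (class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;
import java.util.function.Supplier;

public class MethodReferenceDemo {
    public static List<Integer> lengths(List<String> words) {
        Function<String, Integer> parse = Integer::parseInt; // static method reference (declared for illustration)
        Supplier<ArrayList<Integer>> ctor = ArrayList::new;  // constructor reference
        Function<String, Integer> len = String::length;      // instance method reference
        ArrayList<Integer> out = ctor.get();
        words.forEach(w -> out.add(len.apply(w)));
        return out;
    }

    public static void main(String[] args) {
        System.out.println(lengths(Arrays.asList("foo", "quux"))); // [3, 4]
    }
}
```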
- Java modular design — Architectural approach for organizing Java applications into self-contained modules with explicit dependencies, enabling better maintainability and clearer system boundaries
- Java module migration strategies — Techniques and best practices for transitioning existing monolithic Java applications to a modular architecture, including handling split packages, unnamed modules, automatic modules, and dependency management.
- Java NIO and Reactor Integration — Integration of Java Non-blocking I/O (NIO) channels and selectors with the Reactor pattern to achieve efficient, non-blocking I/O multiplexing in network programming.
- Java NIO components — Core building blocks of Java Non-blocking I/O: channels for I/O operations, buffers for data storage with position/limit/capacity tracking, and selectors (select/epoll) for multiplexing.
- Java NIO.2 Utilities — Utility classes in the java.nio.file package (Paths, Files) that simplify file I/O operations with static methods for path manipulation and file operations.
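The Paths/Files pairing can be shown with a small round trip (requires Java 11+ for writeString/readString; class and method names are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class Nio2Demo {
    public static String roundTrip(String content) throws IOException {
        Path dir = Files.createTempDirectory("nio2");
        Path file = Paths.get(dir.toString(), "notes.txt"); // path manipulation via Paths
        Files.writeString(file, content);                   // file I/O via Files
        String back = Files.readString(file);
        Files.delete(file);
        Files.delete(dir);
        return back;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip("nio2")); // nio2
    }
}
```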
- Java Platform Module System (JPMS) — The formal module system introduced in Java 9, codenamed Jigsaw, that provides strong encapsulation, reliable configuration, and a scalable platform architecture through explicit module declarations and dependencies.
- Java RMI Serialization — Java Remote Method Invocation using native Java Serializable interface for object serialization in distributed Java applications.
- Java Runtime class — A Java API for interacting with the application runtime environment, providing methods for memory inspection, processor information, shutdown hook management, and JVM control.
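A quick sketch of the Runtime APIs mentioned above (output values depend on the host JVM, so none are shown):

```java
public class RuntimeDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Processor and memory inspection.
        System.out.println("processors: " + rt.availableProcessors());
        System.out.println("free/total heap: " + rt.freeMemory() + "/" + rt.totalMemory());
        // Shutdown hooks run when the JVM terminates normally.
        rt.addShutdownHook(new Thread(() -> System.out.println("shutting down")));
    }
}
```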
- Java Security — A page stub about Java security topics with minimal content
- Java security architecture (JCA/JCE) — Java's cryptographic framework including Java Cryptography Architecture (JCA) and Java Cryptography Extension (JCE) for implementing encryption, digital signatures, and security operations in Java applications.
- Java SPI (Service Provider Interface) — Java's programming pattern where interfaces define contracts and third-party providers register implementations for runtime discovery, enabling pluggable architectures.
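The runtime-discovery side of SPI is the ServiceLoader API. A minimal sketch using java.sql.Driver as the service interface (with no JDBC driver on the classpath the iteration is simply empty):

```java
import java.sql.Driver;
import java.util.ServiceLoader;

public class SpiDemo {
    public static int countDrivers() {
        // Providers are discovered from META-INF/services/java.sql.Driver
        // entries on the classpath and instantiated lazily during iteration.
        int count = 0;
        for (Driver d : ServiceLoader.load(Driver.class)) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println("discovered drivers: " + countDrivers());
    }
}
```

The same mechanism underlies JDBC driver loading via DriverManager described elsewhere in this list.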
- Java Stream API — A functional-style data processing API introduced in Java 8 for declarative stream operations on collections and other data sources.
- Java Stream API and Functional Programming — Functional-style data processing API introduced in Java 8 for declarative stream operations on collections, supporting intermediate operations (filter, map) and terminal operations (collect, reduce) with lambda expressions and functional interfaces.
- Java Stream creation methods — Various ways to create Stream instances in Java 8 including Arrays.stream(), Collection.stream(), Stream.of(), and Stream.concat().
- Java Stream intermediate operations — Stateless or stateful transformations on streams that return a new Stream, including filter, map, flatMap, distinct, sorted, and limit operations.
- Java Stream reduction operations — Stream operations that combine elements into a single result using associative accumulation, including reduce(), summingInt(), averagingInt(), maxBy(), and collectingAndThen() patterns.
- Java Stream terminal operations — Operations that consume the stream and produce a result or side effect, including collect, forEach, reduce, count, min/max, and matching operations (allMatch, anyMatch, noneMatch).
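The creation, intermediate, and terminal stages described in the entries above compose into one pipeline. A small sketch (class and method names are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamPipelineDemo {
    public static List<String> shoutedLongWords(List<String> words) {
        return words.stream()                      // creation: Collection.stream()
                .filter(w -> w.length() > 3)       // intermediate (stateless): filter
                .map(String::toUpperCase)          // intermediate (stateless): map
                .sorted()                          // intermediate (stateful): sorted
                .collect(Collectors.toList());     // terminal: collect
    }

    public static void main(String[] args) {
        System.out.println(shoutedLongWords(Arrays.asList("map", "filter", "reduce", "of")));
        // [FILTER, REDUCE]
    }
}
```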
- Java Utility Classes Pattern — A common Java naming convention and design pattern where singular class names (e.g., Object, Array) have corresponding utility classes with pluralized names (e.g., Objects, Arrays) that provide static helper methods.
- java.util.Arrays — Utility class offering static methods for array manipulation operations such as sorting, searching, comparing, filling, and converting arrays to lists.
- java.util.Collections — Utility class providing static methods for operating on collections, including sorting, searching, shuffling, creating unmodifiable/synchronized views, and singleton collections.
- java.util.Objects — Utility class providing static methods for operating on object instances, including null-checks, hash calculation, equality comparison, and toString generation.
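The singular/plural utility-class pattern from the three entries above in one sketch (requires Java 9+ for Objects.requireNonNullElse):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Objects;

public class UtilityClassesDemo {
    public static void main(String[] args) {
        int[] nums = {3, 1, 2};
        Arrays.sort(nums);                                        // Arrays: array helpers
        System.out.println(Arrays.toString(nums));                // [1, 2, 3]

        List<String> names = Arrays.asList("b", "a");
        List<String> frozen = Collections.unmodifiableList(names); // Collections: unmodifiable view
        System.out.println(Collections.max(frozen));              // b

        System.out.println(Objects.equals(null, null));           // true: null-safe comparison
        System.out.println(Objects.requireNonNullElse("x", "y")); // x: null-check with fallback
    }
}
```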
- javaagent — A Java technology that allows bytecode modification at runtime before the main method executes, enabling instrumentation and code transformation without modifying source code.
- JavaScript execution in Selenium — Method to execute custom JavaScript within browser context using JavascriptExecutor.executeScript(), enabling advanced automation scenarios
- jcmd JVM diagnostic command tool — Java diagnostic utility for listing JVM processes and executing diagnostic commands like VM.command_line, GC.heap_dump, and Thread.print
- JDBC connection pooling — A database connection management technique where multiple connection objects are created and maintained in a pool, allowing applications to efficiently borrow and return connections instead of repeatedly opening and closing them.
- JDBC driver loading via ServiceLoader — How DriverManager uses ServiceLoader to automatically discover and instantiate JDBC driver implementations declared as service providers rather than manual Class.forName calls
- Jenkins backup and restore strategy — Procedures for backing up Jenkins data to cloud storage (GCS, S3) using Kubernetes CronJobs and restoring from backup using Kubernetes Jobs with tools like skbn.
- Jenkins Backup and Restore via CronJob — Automated backup system using Kubernetes CronJob with configurable schedules, cloud storage destinations (S3, GCS, Azure), security contexts, and credential management for Jenkins data protection.
- Jenkins BlueOcean plugin — A pipeline visualization plugin for Jenkins that provides a graphical pipeline interface, intuitively displaying CD pipeline run status, branch views, and build results, improving observability and user experience across the CI/CD process.
- Jenkins Configuration as Code (JCasC) — Declarative Jenkins configuration approach using YAML scripts for security, authorization, and system settings, with auto-reload sidecar support for dynamic configuration updates without pod restarts.
- Jenkins container resource configuration — Best practices for configuring Java memory settings (Xms, Xmx) and Kubernetes resource requests/limits for Jenkins containers, including timezone configuration for Asia/Shanghai
- Jenkins Controller StatefulSet Configuration — StatefulSet-based Jenkins controller deployment with customizable image, resource allocation (requests/limits), security contexts, service exposure (ClusterIP/NodePort/LoadBalancer), and health probe configuration.
- Jenkins Helm Chart — Kubernetes Helm chart for deploying Jenkins with containerized agents, providing automated installation, configuration, and lifecycle management of Jenkins infrastructure on Kubernetes clusters.
- Jenkins Helm Chart Configuration Parameters — Comprehensive configuration reference for deploying Jenkins on Kubernetes via Helm, covering controller settings, agent configurations, persistence, RBAC, networking, and monitoring parameters with their default values.
- Jenkins Kubernetes Agent Pod Templates — Kubernetes plugin configuration for Jenkins build agents using pod templates with customizable containers, resources, workspace volumes, and sidecar patterns for distributed CI/CD execution.
- Jenkins Kubernetes deployment — Deployment of Jenkins as a containerized application within a Kubernetes cluster using namespace, deployment, service, and ingress resources
- Jenkins Kubernetes networking — Service and Ingress configuration for exposing Jenkins on Kubernetes, including HTTP port 8080 and agent communication port 50000 through an nginx ingress controller
- Jenkins Kubernetes plugin agent spawning — The mechanism by which Jenkins dynamically creates and manages build agent pods in Kubernetes to execute pipeline jobs, including configuration of API connections and pod templates.
- Jenkins NetworkPolicy Security Configuration — Kubernetes NetworkPolicy resources for controlling Jenkins controller access from internal and external agents, with IP CIDR whitelisting, pod label filtering, and namespace-based traffic policies.
- Jenkins Persistence and Storage Configuration — Persistent Volume Claim (PVC) configuration for Jenkins home directory with storage class selection, access modes (ReadWriteOnce), sizing, and support for existing claims or additional volume mounts.
- Jenkins persistent volume configuration — Configuration strategies for Jenkins data persistence in Kubernetes using PersistentVolumeClaims, including handling of storage classes, existing claims, and volume mount timeout issues.
- Jenkins Pipeline five-step build method — A standardized Jenkins pipeline workflow: pull (fetch code) → build (Maven compile) → package (build the JAR) → image (build the Docker image) → push (push to the Harbor registry), automating the transformation from source code to container image.
- Jenkins RBAC and ServiceAccount Configuration — Role-Based Access Control setup for Jenkins with separate service accounts for controller and agent pods, configurable secret read permissions, and automated service account creation.
- Jenkins recommended plugins — Core plugins that should be installed with Jenkins including docker, docker-pip, blue-ocean, and k8s to enable containerized builds, improved UI, and Kubernetes integration
- Jenkins security realm and authorization strategies — Configuration options for Jenkins authentication (local users, LDAP, OIDC) and authorization (role-based access, matrix permissions) typically managed through JCasC.
- Jenkins containerized deployment — Deploying the Jenkins CI/CD service in Kubernetes with a custom Docker image that bundles Maven, Git credentials, the Docker CLI, and SSH keys, mounting an NFS persistent volume for the Jenkins Home data, enabling code builds, image packaging, and image pushes from inside the container.
- Jest auto-configuration — Spring Boot includes JestAutoConfiguration for automatic setup of Jest HTTP client for Elasticsearch
- JIRA project management and filtering — Atlassian JIRA tool for issue tracking and project management, with emphasis on filter functionality for organizing and viewing tasks
- JMX monitoring for Java applications — Integration of jmx_prometheus_javaagent for exposing JVM and application metrics via HTTP endpoint on configurable port (default 12346), enabling monitoring of Tomcat and other Java services in Prometheus
- JNDI tree lookup pattern — The practice of registering resources such as DataSources in a JNDI (Java Naming and Directory Interface) tree structure, enabling applications to locate and access these resources through directory lookup operations.
- jQuery Form Plugin ajaxSubmit — AJAX form submission technique using jQuery Form Plugin with fieldSerialize and ajaxSubmit methods for handling multipart form data asynchronously
- jQuery Validation Plugin integration — Client-side form validation library that validates form fields before submission, with rules definition and submitHandler callback for custom submission logic
- JRE8 base image build — Building a JRE base image from jre8u112 that bundles the Prometheus JMX monitoring agent, timezone configuration, and an entrypoint startup script, serving as the runtime foundation for containerized Java applications and supporting dynamic configuration of JVM parameters and the JAR name via environment variables.
- JSBin — A web-based frontend code editor that enables direct browser editing and live preview of web code, commonly used for rapid prototyping and code sharing.
- JSFiddle — An online frontend development environment providing in-browser code editing capabilities for creating, testing, and sharing web code snippets.
- JSON data structure syntax — JSON format uses curly braces {} for objects with key-value pairs separated by colons, and square brackets [] for arrays/lists, requiring both keys and string values to be quoted
- JSON marshaling and unmarshaling in Go — Converting Go structs to JSON bytes using json.Marshal() for API responses, and converting JSON request bodies to Go structs using json.Unmarshal() with error handling for invalid data.
- JSON serialization with __dict__ — Using an object's __dict__ attribute converts Python class instances to dictionaries, making them compatible with JSON serialization
- JSON syntax and data structure rules — JSON uses curly braces {} for objects with key-value pairs (separated by colons) and square brackets [] for arrays/lists, requiring all keys and string values to be quoted
- JSON use cases in software engineering — JSON serves as a data interchange format for HTTP web APIs, configuration files, database storage, caching systems, and infrastructure configuration in DevOps contexts
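The syntax rules from the entries above can be seen in a small (hypothetical) configuration document: objects in curly braces with quoted keys, values separated by colons, and an array in square brackets:

```json
{
  "service": "orders",
  "port": 8080,
  "tls": true,
  "replicas": [1, 2, 3],
  "labels": {
    "team": "payments",
    "tier": "backend"
  }
}
```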
- JSON-based data persistence — Using JSON files for simple application data storage, reading with json.loads() and writing with json.dumps() for customer records
- JSON-based data persistence pattern — Application data storage pattern using JSON files for persistence, with read/write functions that convert between Python dictionaries and JSON format
- JVM Generational Model — The memory management architecture in Java Virtual Machine that divides heap memory into different generations based on object age, optimizing garbage collection efficiency
- JVM process detection using jps and jinfo — Method for identifying and monitoring specific Java Virtual Machine instances by combining jps process listing with jinfo configuration inspection
- K8S self-healing and failure recovery — Kubernetes' automatic failure recovery mechanism: when a node goes down, the Deployment Controller automatically reschedules its Pods onto other healthy nodes; operators only need to remove the offline node marker for the cluster to self-heal and rebalance workloads.
- Kafka broker configuration — Essential server.properties settings for running a Kafka broker cluster, including broker.id, log directories, partition counts, and ZooKeeper connection strings
- Kafka cluster ZooKeeper integration — The dependency relationship between Kafka brokers and ZooKeeper for cluster coordination, service discovery, and metadata management through connection strings specifying multiple zookeeper host:port pairs.
- Kafka console operations — Command-line interface utilities for creating producers (kafka-console-producer.sh) and consumers (kafka-console-consumer.sh) to interact with Kafka topics directly from the terminal for testing and debugging.
- Kafka manager deployment — Kafka cluster management tool built from scala-sbt base image with ZooKeeper connectivity (ZK_HOSTS), deployed in Kubernetes with Ingress exposure (km.od.com:9000) for topic and consumer group monitoring.
- kaniko — A container image building tool designed to run inside Kubernetes pods without requiring Docker daemon, solving the Docker-in-Docker problem.
- Key Bound Operations — A category of RedisTemplate operations that are bound to a specific key, allowing for chained operations and more fluent API interaction with Redis data structures.
- Key Type Operations — A category of RedisTemplate operations that are grouped by Redis data type (String, Hash, List, Set, Sorted Set, etc.), providing type-specific methods for data manipulation.
- keyboard-shortcut-workflow — A productivity pattern using keyboard shortcuts to rapidly capture and process information without disrupting flow, exemplified by Alt+X and Ctrl+X sequences for text selection and copying.
- kiaki — A tool or project mentioned in a DevOps context (2022-11), but the source document provides no substantive information about its nature, purpose, or functionality.
- kiaki documentation template — A structured documentation format with sections for origin (緣起), description (是什麼), download location (去哪下載), and usage instructions (怎麼玩), representative of Chinese-language technical documentation patterns.
- Kiali Observability Console — An Istio-specific observability and management tool that infers service mesh topology, provides health metrics, integrates with Grafana for advanced queries, and supports distributed tracing through Jaeger integration.
- Kilo CLI and KiloClaw agent frameworks — Two AI agent tools: Kilo CLI is a VS Code-integrated command-line tool supporting custom OpenAI-compatible endpoints, while KiloClaw is a managed agent service requiring no self-hosted infrastructure; both support Qwen models
- kind (Kubernetes in Docker) — A tool for running local Kubernetes clusters using Docker container nodes, enabling quick development and testing environments without requiring full infrastructure setup.
- KinD cluster setup with MetalLB — A bash script automation for creating Kubernetes clusters using KinD (Kubernetes in Docker) with MetalLB load balancer support on Linux systems
- kind local Kubernetes cluster — Tool for running local Kubernetes clusters using Docker containers as nodes, providing a development environment for testing Kubernetes deployments without requiring full cluster infrastructure.
- KinD prerequisites and dependency management — Required toolchain including kubectl, kind, and docker that must be pre-installed before cluster provisioning
- Knowledge web — A network structure formed by connecting individual notes through hyperlinks, enabling hierarchical organization and non-linear navigation between related concepts
- Knowledge web structure — A network-based organization model where notes are connected through hyperlinks, enabling non-linear navigation and emergent connections between related concepts.
- Koa Framework — A lightweight Node.js web framework designed by the team behind Express, featuring a more modern middleware architecture using async/await and greater modularity
- kpack — A Kubernetes-native build service for creating OCI container images through declarative configuration and automated build pipelines.
- kube-apiserver — The front-end component of Kubernetes control plane that exposes the Kubernetes API, handles REST calls and kubectl commands, and serves as the central entry point for all cluster operations
- Kube-apiserver request pipeline — The three-stage security process that all Kubernetes API requests must pass through: Authentication (identity verification), Authorization (permission checks), and Admission Control (resource validation)
- kube-apiserver RESTful API integration — The architectural relationship where kubectl commands serve as a command-line wrapper around Kubernetes kube-apiserver's RESTful API, with CLI operations translating to underlying API calls for cluster management.
- kube-scheduler — Kubernetes component responsible for assigning newly created Pods to the most suitable Worker Nodes based on filtering and scoring policies while monitoring all cluster nodes
- kube-state-metrics — Kubernetes resource metrics exporter that collects data on most built-in K8S resources (pods, deployments, services, etc.) and provides statistics on resource collection counts and anomalies, deployed via DaemonSet in kube-system namespace
- kubeconfig structure — YAML configuration file organizing Clusters (API endpoints and certificates), Contexts (cluster-user-namespace mappings), and Users (authentication credentials) for kubectl operations
- kubectl — Kubernetes command-line tool that wraps the kube-apiserver's RESTful API for managing cluster operations and resources
- kubectl apply vs create — Two kubectl commands for resource creation: 'create' can only create new resources and fails if they exist, while 'apply' performs declarative updates by comparing desired state against existing resources and applying changes incrementally.
- kubectl auth can-i — Diagnostic command for querying Kubernetes API authorization review to determine whether a user or service account has permission to perform specific actions on resources, useful for permission verification and debugging.
- kubectl basic commands — Core kubectl operations for managing Kubernetes resources: apply (create/update from files), describe (detailed resource status), get (list resource information), create (create resources), delete (remove resources), run (create and run containers), expose (create services), set (configure resources), exec (execute commands in containers), and logs (view container logs)
- kubectl binary installation workflow — Manual process for downloading and installing the kubectl command-line tool from official Kubernetes releases, including version checking and binary placement in system path.
- kubectl cluster verification commands — Essential kubectl commands for verifying local Kubernetes installation: kubectl cluster-info for cluster endpoint information, kubectl get nodes for node status, and kubectl version for client/server version compatibility checking.
- kubectl cluster-info — kubectl command for retrieving information about a Kubernetes cluster, including the endpoint addresses of the control plane and cluster services.
- kubectl command interface — The primary command-line tool for managing Kubernetes clusters, supporting operations like resource creation (kubectl create/apply), inspection (kubectl get/describe), and networking (kubectl port-forward) for local-to-cluster access.
- kubectl command patterns — Essential command-line patterns for Kubernetes cluster management including resource inspection, creation, deletion, scaling, and service exposure operations
- kubectl command syntax — Standard kubectl command structure: kubectl [command] [TYPE] [NAME] [flags], where command specifies operations like create/get/describe, TYPE refers to resource types (pod, service, deployment), NAME identifies specific resources, and flags specify optional parameters
- kubectl config commands — Set of CLI utilities for managing kubeconfig including set-context, use-context, current-context, set-cluster, set-credentials, and unset operations
- kubectl config set-credentials — Command for configuring kubeconfig credentials using tokens, allowing persistent authentication for kubectl commands without repeated token entry.
- kubectl config set-credentials with token — Kubectl credential configuration command that stores a service account token in the kubeconfig file for authentication, using set-credentials with the --token flag.
- kubectl configuration management — Tools for viewing kubeconfig settings ('kubectl config view', 'kubectl config get-contexts') to debug and verify Kubernetes cluster connection state
- kubectl context management — Mechanism for managing multiple cluster configurations through contexts, using commands like current-context, get-contexts, and use-context to switch between different Kubernetes clusters or namespaces.
- kubectl create vs kubectl apply — The distinction between kubectl create for explicitly creating resource objects versus kubectl apply for declaratively managing resources with YAML configuration files.
- kubectl delete — Command for removing Kubernetes resources such as pods from a cluster
- kubectl deployment management — Command-line operations for creating, inspecting, and deleting Kubernetes deployments using kubectl commands including create, get, and delete.
- kubectl describe pod — Diagnostic command that outputs comprehensive detailed information about a specific pod including status, containers, volumes, conditions, and recent events
- kubectl exec for container access — A kubectl command that enables interactive terminal access to running containers within a pod, supporting flags for container selection, namespace specification, and TTY allocation.
- kubectl expose command — A command-line utility for creating a Service to expose a Deployment, supporting parameters like --port, --target-port, and --type to configure service behavior quickly.
- kubectl get pods — Command used to list and display the status of all pods within a specified Kubernetes namespace
- kubectl namespace context configuration — Methods to specify namespaces in kubectl commands including per-request flags (--namespace), context-based default settings, and Pod manifest metadata fields
- kubectl persistent volume claims command — A Kubernetes CLI command used to list and view all PersistentVolumeClaim (PVC) resources in a cluster, showing storage requests made by applications.
- kubectl persistent volumes command — A Kubernetes CLI command used to list and view all PersistentVolume (PV) resources in a cluster, showing the storage infrastructure available for applications.
- kubectl pod port-forwarding — A Kubernetes feature and CLI command that creates a secure tunnel to forward traffic from a local port to a specific pod's port, enabling local access to cluster-internal services.
- kubectl pods listing across namespaces — The use of 'kubectl get pods -A' to list all pods across all namespaces in a Kubernetes cluster, with the '-o wide' flag providing additional details like node and IP information.
- kubectl port-forward — A networking utility that maps local ports to Kubernetes Pod ports, enabling direct localhost access to cluster services for development and debugging without exposing services externally.
- kubectl port-forward address binding — The --address 0.0.0.0 flag in kubectl port-forward binds the forwarding to all network interfaces, making the forwarded port accessible from any network interface on the host machine rather than just localhost.
- kubectl port-forward for MySQL access — Using kubectl port-forward to expose a MySQL pod's port 3306 to localhost for direct database access and management.
- kubectl port-forwarding — Technique for forwarding local ports to Kubernetes services, enabling local access to cluster-hosted applications via kubectl port-forward with address binding options
- kubectl port-forwarding for service access — A Kubernetes technique for creating secure local tunneling to cluster services, used to access the ArgoCD UI locally via kubectl port-forward.
- kubectl proxy — Kubectl command that creates a proxy server between your local machine and the Kubernetes API server, enabling access to cluster services through localhost URLs like http://localhost:8001/api/v1/namespaces/...
- kubectl proxy dashboard access pattern — A method for accessing Kubernetes Dashboard through kubectl proxy at localhost:8001/ui, which expands to a full API proxy URL, requiring the service to be named 'kubernetes-dashboard' (configurable via fullnameOverride).
- kubectl proxy for dashboard access — Access method for Kubernetes Dashboard using kubectl proxy to create a local proxy server (typically on 127.0.0.1:8001) that forwards API requests to the cluster.
- kubectl resource targeting — Methods for specifying multiple resources in kubectl commands including grouping by type (TYPE1 name1 name2), mixed type specification (TYPE1/name1 TYPE2/name3), and file-based specification (-f file.yaml -f directory).
- kubectl run — Command-line instruction for creating and deploying pods in a Kubernetes cluster using a specified container image
- kubectl Token Retrieval Pattern — A command pattern using kubectl with jsonpath and go-template to extract and base64 decode service account tokens from Kubernetes secrets
- kubectl top command — A Kubernetes CLI command that displays real-time resource usage information for nodes (kubectl top node) and pods (kubectl top pods) by querying the metrics API provided by Metrics Server.
- kubectl troubleshooting errors — Common connection issues when using kubectl including 'No connection could be made' (localhost connection refused) and timeout errors connecting to cluster endpoints
- kubectl wait for pod readiness — Using kubectl wait with --for=condition=ready selector to block until ingress controller pods are ready, with timeout configuration
- KubeKey — A Kubernetes cluster installation tool developed by KubeSphere that simplifies the deployment of Kubernetes and KubeSphere through a single binary and command-line interface, handling environment checks, component downloads, and cluster provisioning.
- KubeKey Environment Validation — Pre-installation check performed by KubeKey that verifies the presence and compatibility of system tools including sudo, curl, openssl, docker, NFS client, and storage clients before proceeding with cluster installation.
- kubelet — A lightweight agent running on each worker node that communicates with the control plane to ensure containers are running within pods, executing operations requested by the master node.
- kubelet-insecure-tls flag — A Metrics Server configuration parameter that disables TLS certificate verification when communicating with kubelets, required for local Docker-Desktop Kubernetes environments that lack proper certificates.
- Kubernetes (k8s) — An open-source, portable container orchestration platform for managing containerized workloads and services, featuring declarative configuration and automation capabilities.
- Kubernetes API version evolution and deprecation — Kubernetes releases approximately every three months with rapid API deprecation, making online tutorials quickly outdated and requiring developers to consult latest documentation and source code for API changes.
- Kubernetes API Version Management — Best practices for handling Kubernetes' rapid release cycle (approximately 3 months) and API deprecation, emphasizing the need to consult current documentation and source code when following online tutorials.
- Kubernetes architecture components — The master and node architecture including API Server (core operations hub), etcd (cluster state storage), Controller Manager (state maintenance), Scheduler (Pod assignment), kubelet (node agent), and kube-proxy (service load balancing with IPVS).
- Kubernetes Authentication — Identity verification mechanisms controlling access to Kubernetes clusters, distinguishing between normal users and service accounts with multiple verification methods including X.509 certificates, tokens, OpenID, and webhooks.
- Kubernetes AutoScaling — Automated resource configuration system that monitors resource utilization metrics and performs horizontal, vertical, or multi-dimensional scaling to respond to system load fluctuations, requiring pre-configured resource requests and Metrics Server.
- Kubernetes autoscaling behavior policies — HPA configuration parameters that control the rate of scaling through scaleUp and scaleDown policies, which can limit replica changes by percentage or absolute pod count per time period (periodSeconds) and select the maximum policy when multiple constraints apply.
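The scaleUp/scaleDown policies described above can be sketched in an autoscaling/v2 HPA; all names and numbers here are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
  behavior:
    scaleUp:
      policies:
        - type: Percent      # at most double the replicas per minute...
          value: 100
          periodSeconds: 60
        - type: Pods         # ...or add 4 pods per minute
          value: 4
          periodSeconds: 60
      selectPolicy: Max      # when both policies apply, pick the larger change
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent      # remove at most 10% of replicas per minute
          value: 10
          periodSeconds: 60
```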
- Kubernetes Autoscaling Mechanisms — Three-dimensional autoscaling capabilities including Horizontal Pod Autoscaler (HPA) for replica scaling, Vertical Pod Autoscaler (VPA) for resource tuning, and Cluster Autoscaler for node provisioning, with Metrics Server as foundation.
- Kubernetes AutoScaling prerequisites — Foundation requirement that Metrics Server must be installed and operational before implementing Kubernetes autoscaling features, as autoscaling decisions depend on real-time resource metrics.
- Kubernetes Blue/Green Deployment with Service Selector Switching — A zero-downtime deployment strategy using Kubernetes Services where two versions run simultaneously and traffic switches via label selector updates from version v1 to v2 pods, followed by cleanup of old version resources.
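A minimal sketch of the selector-switching Service, assuming hypothetical `app`/`version` labels:

```yaml
# Flipping the version label in the selector moves all traffic
# from the v1 pods to the v2 pods with no downtime.
apiVersion: v1
kind: Service
metadata:
  name: myapp            # hypothetical name
spec:
  selector:
    app: myapp
    version: v1          # change to v2 to cut traffic over
  ports:
    - port: 80
      targetPort: 8080
```

The switch itself can be performed with `kubectl edit` or `kubectl patch` on the Service, after which the v1 resources can be cleaned up.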
- Kubernetes building blocks — The core architectural components of Kubernetes that must be understood to port existing infrastructure, including deployments, services, configmaps, secrets, statefulsets, storage classes, and ingress.
- Kubernetes capabilities for container management — Kubernetes provides elastic distributed system framework for containers including scaling, failover, automated deployment, rolling updates, rollback, monitoring, and Infrastructure as Code through declarative configuration.
- Kubernetes certificate management with CFSSL — Certificate authority setup using CloudFlare's CFSSL toolkit for issuing TLS certificates, including CA certificates, peer certificates (for etcd), server certificates (for API server), and client certificates (for kubelet/kube-proxy), with proper JSON configuration files defining signing profiles
- Kubernetes cluster deployment workflow — Step-by-step enterprise deployment process covering DNS (Bind9) setup, Docker environment preparation, Harbor private registry, etcd cluster, master components, and worker nodes with supervisor process management.
- Kubernetes cluster reset and cleanup — Using kubeadm reset to revert failed installations by cleaning up cluster state, removing certificates, and restoring nodes to pre-installation conditions.
- Kubernetes compute resource units — Kubernetes abstracts hardware resources into CPU units (cores/vCPUs/Hyperthreads) and Memory units (bytes with E/P/T/G/M/K suffixes) for container resource allocation
- Kubernetes ConfigMap — A Kubernetes object used to store non-sensitive configuration data as key-value pairs or files, which can be mounted as volumes into containers
- Kubernetes ConfigMap integration with Apollo — Using ConfigMaps to decouple container images from configuration files, enabling environment-specific configuration injection into Apollo services through volume mounts rather than baking configuration into container images.
- Kubernetes ConfigMap volume mounting — Technique for decoupling application configuration from container images by mounting ConfigMap data as files into specific container directories, enabling configuration portability and reusability across environments.
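A sketch of the volume-mount technique; the ConfigMap keys, mount path, and image are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.yml: |     # each key becomes a file in the mounted directory
    server:
      port: 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx:1.25        # stand-in image
      volumeMounts:
        - name: config
          mountPath: /etc/app  # application.yml appears here
  volumes:
    - name: config
      configMap:
        name: app-config
```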
- Kubernetes ConfigMaps and Secrets — Kubernetes configuration management mechanisms for separating application configuration and sensitive credentials from pod specifications, enabling environment variable injection and secure credential management.
- Kubernetes container probes — Health check mechanisms (liveness, readiness, startup probes) that use four detection handlers (Exec, TCPSocket, HTTPGet, gRPC) to monitor container state and trigger recovery actions
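The three probe types and three of the four handlers can be sketched in one Pod spec; paths, ports, and the image are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
    - name: app
      image: myapp:1.0         # hypothetical image
      startupProbe:            # gates the other probes until startup succeeds
        exec:
          command: ["cat", "/tmp/ready"]
        failureThreshold: 30
        periodSeconds: 5
      livenessProbe:           # failure triggers a container restart
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10
      readinessProbe:          # failure removes the pod from Service endpoints
        tcpSocket:
          port: 8080
        periodSeconds: 5
```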
- Kubernetes Context — Client-side configuration alias that groups cluster connection details, user credentials, and default namespace for simplified multi-cluster and multi-environment kubectl operations
- Kubernetes Control Plane Components — The cluster's central nervous system (master node) containing kube-apiserver, kube-scheduler, kube-controller-manager, and etcd, which collectively manage cluster state, handle API requests, schedule pods, run controllers, and store configuration data.
- Kubernetes core components — The fundamental building blocks of Kubernetes including Pods, Services, Deployments, and Ingress that form the 'three brothers' architecture
- Kubernetes core concepts — Fundamental K8S abstractions including Pod (atomic container unit), Pod controllers (Deployment, StatefulSet, DaemonSet, etc.), namespace isolation, labeling, Service discovery, and Ingress for L7 routing.
- Kubernetes CPU and Memory Resource Model — Core resource types in Kubernetes: CPU (compressible resource where pods starve but don't exit when insufficient) and memory (incompressible resource where pods get killed via OOM when insufficient). Resource configuration happens per-container, with pod-level values being the sum.
- Kubernetes CPU and Memory Resources — Abstracted computing resources in Kubernetes where CPU is measured in cores (vCPU/Core/Hyperthread) and Memory in bytes with suffixes (E/P/T/G/M/K), used as the basis for Request/Limit specifications.
- Kubernetes custom resources — Extensible Kubernetes API objects that are not part of the core Kubernetes installation but enable modular functionality like Metrics Server, VPA, and other autoscaling components to be added to the cluster.
- Kubernetes custom resources for autoscaling — Metrics Server, VPA, and other autoscaling components are implemented as Kubernetes custom resources rather than core components, enabling modular architecture but requiring separate installation and contributing to rapid API evolution.
- Kubernetes Dashboard — A web-based UI for Kubernetes clusters that provides application management, troubleshooting capabilities, and cluster administration through a general-purpose interface.
- Kubernetes Dashboard Authentication — The process of configuring service accounts, cluster role bindings, and retrieving JWT bearer tokens to authenticate with the Kubernetes Dashboard web UI
- Kubernetes Dashboard Deployment — Specific deployment example using Helm to install the Kubernetes Dashboard, a web-based UI for cluster management, configurable via Helm values for service type, replicas, and RBAC settings.
- Kubernetes Dashboard Helm Chart — A Helm package manager chart for deploying Kubernetes Dashboard, a web-based UI for managing Kubernetes clusters, applications, and troubleshooting cluster issues.
- Kubernetes Dashboard Ingress Configuration — The challenges and SSL certificate requirements when configuring Kubernetes Dashboard access through Ingress controllers instead of direct NodePort or service exposure
- Kubernetes Dashboard installation via kubectl apply — Deployment method for installing Kubernetes Dashboard using kubectl apply with a remote YAML manifest from the official repository, specifically version v2.5.1.
- Kubernetes Dashboard major version upgrade from 1.x to 2.x — Breaking changes introduced in Kubernetes Dashboard 2.0.0 requiring manual migration actions, including removal of clusterAdminRole, new RBAC management, renamed security context parameters, updated label schemes, and changes to login-related parameters.
- Kubernetes Dashboard metrics scraper — An optional companion component (kubernetesui/metrics-scraper v1.0.4) that can be enabled alongside the Kubernetes Dashboard to collect and present cluster metrics, with its own configurable security context and deployment parameters.
- Kubernetes Dashboard Plugin — A web-based UI for Kubernetes clusters that provides visualization of metrics, KPIs, and cluster state with role-based access control for safe management without direct host access.
- Kubernetes Dashboard RBAC security model — Role-based access control requirements for Kubernetes Dashboard version 2.x, which removed the dangerous clusterAdminRole parameter, requires explicit secret creation, and enforces minimal privileges for ServiceAccounts to enhance cluster security.
- Kubernetes Dashboard recommended.yaml deployment — Official Kubernetes Dashboard installation method using kubectl apply with the recommended.yaml manifest from GitHub, which deploys all necessary components including Service, Deployment, and RBAC resources.
- Kubernetes Dashboard ServiceAccount token authentication — Authentication mechanism for Kubernetes Dashboard requiring a service account token, typically generated by creating a ClusterRoleBinding with cluster-admin permissions and extracting the token from a Secret.
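The commonly shown setup looks like the following; binding to cluster-admin is suitable only for local experiments, and the account name is a placeholder:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin          # full cluster access; not for production
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
```

On recent clusters the token can be issued with `kubectl -n kubernetes-dashboard create token admin-user`; older versions extract it from the auto-created Secret.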
- Kubernetes Dashboard token authentication — Authentication method for accessing Kubernetes Dashboard using ServiceAccount tokens, which can be generated from cluster secrets using kubectl describe secret commands.
- Kubernetes Dashboard Web UI — Official web-based user interface for Kubernetes that provides visual management of cluster resources, reducing reliance on kubectl commands and offering a GUI alternative for cluster administration.
- Kubernetes default namespace — The pre-configured namespace where Kubernetes resources are created when no specific namespace is specified in commands
- Kubernetes default resource injection — The automatic application of default CPU and memory requests/limits to containers that don't explicitly declare their own resource specifications, driven by LimitRange policies defined at the Namespace level.
- Kubernetes default scheduling policies — Four predicate categories for node filtering: GeneralPredicates (CPU/memory availability), Volume-related rules (persistent volume constraints), Node-related rules (taints, conditions), and Pod-related rules (affinity/anti-affinity). Scheduler uses 16 Goroutines to evaluate all nodes concurrently, then applies Priorities (0-10 scoring) with LeastRequestedPriority and BalancedResourceAllocation being key scoring algorithms.
- Kubernetes default scheduling strategies — Built-in scheduling policies including GeneralPredicates (resource filtering), volume-related rules, host constraints (taints), and pod affinity/anti-affinity rules that operate concurrently via Goroutines during node selection.
- Kubernetes Deployment — A Kubernetes controller that manages stateless applications and maintains a specified number of pod replicas, providing self-healing capabilities when pods are deleted or fail.
- Kubernetes deployment pipeline with Spinnaker — End-to-end workflow integrating Jenkins for image building, Harbor for registry, and Spinnaker for deploying containerized applications to Kubernetes with rolling updates and health checks.
- Kubernetes Deployment rollout and rollback — The version control mechanism for Deployments that tracks spec.template changes as revisions, enabling controlled rollouts and reverting to previous stable states when issues arise.
- Kubernetes Deployment Strategies — Overview of common deployment patterns in Kubernetes including recreate, rolling update, blue-green, canary, A/B testing, and shadow deployments, each with different trade-offs between downtime, resource requirements, and rollback capabilities.
- Kubernetes Deployments vs StatefulSets — Two Kubernetes workload controllers with different purposes: Deployments for stateless applications where pods are interchangeable, and StatefulSets for stateful applications like databases requiring stable network identities and persistent storage.
- Kubernetes development tools ecosystem — Collection of complementary tools extending Kubernetes functionality including minikube (local development), skaffold (development workflow), service mesh tools (istio, kiali), CI/CD (argocd), monitoring (prometheus, EFK), and build tools (buildah, kaniko, skopeo, dive).
- Kubernetes DNS configuration management — DNS resolution workflow that includes editing /var/named/od.com.zone on the BIND9 server to add A records, bumping the zone serial number, restarting the service with systemctl restart named, and verifying resolution with the dig command.
- Kubernetes ecosystem tools — Collection of complementary tools that extend Kubernetes functionality including minikube (local development), skaffold (development workflow), Istio (service mesh), ArgoCD (GitOps deployment), Prometheus (monitoring), and Kustomize (configuration management).
- Kubernetes Endpoints Object — A Kubernetes API object that contains the network addresses (IPs and ports) of backend pods or external services that a Service routes traffic to, allowing manual endpoint specification when using Services without selectors
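A sketch of a selector-less Service paired with a manually specified Endpoints object, for example to front an external database; the IP and port are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db     # no selector, so no Endpoints are auto-created
spec:
  ports:
    - port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db     # must match the Service name
subsets:
  - addresses:
      - ip: 10.0.0.50   # placeholder external address
    ports:
      - port: 3306
```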
- Kubernetes Enterprise Deployment Strategy — Production-focused deployment methodology for Kubernetes clusters including DNS setup with BIND9, certificate management, etcd cluster configuration, API server high-availability, and multi-layer load balancing.
- Kubernetes firewall port configuration — Network firewall rules required for Kubernetes cluster communication, including ports for API server (6443), etcd (2379-2380), and kubelet/kube-proxy services.
- Kubernetes Fundamentals — Core container orchestration platform covering essential building blocks (Pods, Services, Deployments, Ingress), resource management, and declarative configuration for automating deployment, scaling, and management of containerized applications.
- Kubernetes headless service — A Kubernetes Service with clusterIP set to None, which bypasses kube-proxy load balancing and returns DNS records pointing directly to pod IPs or endpoint addresses.
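A minimal headless Service sketch with hypothetical names:

```yaml
# clusterIP: None makes cluster DNS return the pod IPs directly
# instead of a single virtual IP handled by kube-proxy.
apiVersion: v1
kind: Service
metadata:
  name: mysql-headless
spec:
  clusterIP: None
  selector:
    app: mysql
  ports:
    - port: 3306
```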
- Kubernetes health probes — Container health monitoring mechanisms using liveness, readiness, and startup probes with four detection handlers (Exec, TCPSocket, HTTPGet, gRPC) to determine container state and trigger recovery actions.
- Kubernetes high availability with keepalived and nginx reverse proxy — L4 reverse proxy architecture using nginx stream module and keepalived VRRP for API Server high availability, providing virtual IP (10.4.7.10:7443) with automatic failover between master nodes.
- Kubernetes horizontal scaling — The practice of adjusting the number of Pod replicas in a Deployment to handle varying loads, achievable through spec.replicas changes, kubectl scale commands, or direct editing.
- Kubernetes HPA-VPA incompatibility — The technical limitation preventing Horizontal Pod Autoscaler and Vertical Pod Autoscaler from working together when using the same metric type, as both modify conflicting pod attributes (HPA adjusts replica count while VPA modifies resource requests, requiring pod restarts).
- Kubernetes Ingress — A Kubernetes API object that manages external access to services in a cluster, typically through HTTP/HTTPS, providing routing rules and load balancing capabilities.
- Kubernetes Ingress configuration — Method of exposing HTTP/HTTPS routes to external traffic through domain name mapping and load balancer integration
- Kubernetes ingress controller — L7 HTTP/HTTPS reverse proxy and load balancer that exposes Kubernetes services to external traffic based on domain and URL path rules.
- Kubernetes Ingress for Layer 7 Routing — Kubernetes Ingress resource operating at HTTP/HTTPS layer 7 to enable fine-grained traffic routing based on domain and path, managed through Ingress Controller implementations like Nginx.
- Kubernetes ingress patterns — Exposing Kubernetes services to external traffic using Ingress controllers and Ingress resources, providing HTTP/HTTPS routing and acting as a unified entry point for multiple services.
- Kubernetes Ingress resource routing — Configuration of Ingress objects (myapp-ing) that define HTTP routing rules, mapping domain names (myapp.od.com) to backend services through the ingress controller.
- Kubernetes Ingress routing rules — Configuration blocks within Ingress specifications that define how traffic is routed from specific hosts and paths to backend services, including pathType matching strategies.
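A rules block can be sketched as follows, reusing the myapp-ing/myapp.od.com example from the entry above; the backend Service name and port are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ing
spec:
  rules:
    - host: myapp.od.com
      http:
        paths:
          - path: /
            pathType: Prefix   # other strategies: Exact, ImplementationSpecific
            backend:
              service:
                name: myapp    # assumed backend Service
                port:
                  number: 80
```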
- Kubernetes interview preparation resource — A Bilibili video series featuring 29 intensive interview questions for Kubernetes positions at major tech companies, serving as preparation material for technical interviews.
- Kubernetes kubeadm cluster initialization — The process of bootstrapping a Kubernetes cluster using kubeadm, including pre-flight checks, certificate generation, control-plane component setup, and worker node joining procedures.
- Kubernetes labels and selectors — Key-value pairs attached to Kubernetes objects (Pods, Services, Deployments) that enable grouping and identification of related resources through selector queries for routing and management.
- Kubernetes Learning Approach — Philosophy emphasizing understanding fundamental concepts rather than following rote tutorials, given the complexity of modern backend development spanning multiple languages, frameworks, databases, and infrastructure needs.
- Kubernetes learning roadmap — A structured progression for learning Kubernetes from basic concepts through installation, core resources, storage, monitoring, and autoscaling
- Kubernetes LimitRange — A Kubernetes resource policy object that constrains resource allocation within a Namespace by defining minimum and maximum CPU/memory limits, default values, and storage constraints for Pods and Containers.
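A hedged LimitRange sketch: the defaults are injected into containers that omit resources, and the min/max bounds reject out-of-range Pods at admission. Namespace and values are illustrative:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-mem-limits
  namespace: dev         # applies only within this namespace
spec:
  limits:
    - type: Container
      default:           # injected as limits when unspecified
        cpu: 500m
        memory: 256Mi
      defaultRequest:    # injected as requests when unspecified
        cpu: 100m
        memory: 128Mi
      min:               # Pods requesting less are rejected
        cpu: 50m
      max:               # Pods requesting more are rejected
        cpu: "2"
        memory: 1Gi
```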
- Kubernetes load testing with curl — Technique for testing Kubernetes service availability and load distribution by running curl requests in a loop, useful for verifying load balancing across multiple pod versions
- Kubernetes log collection pipeline — The architecture and methodology for gathering, processing, and forwarding container and pod logs from Kubernetes clusters to centralized logging systems like Elasticsearch using Fluent Bit as the log forwarder.
- Kubernetes log storage management — Critical operational requirement to clean log files from host nodes or provision large remote storage volumes to prevent disk exhaustion and system crashes from accumulated log data.
- Kubernetes metrics data sources — Three categories of metrics in Kubernetes: host-level data via Node Exporter, component metrics from /metrics APIs (API Server, kubelet), and core Kubernetes object metrics (Pod, Node, container, Service) via Metrics Server.
- Kubernetes Metrics Server — A cluster-level data aggregator for Kubernetes that collects resource metrics (CPU, memory) from nodes and pods via kubelet and exposes them through the metrics.k8s.io API, storing data only in memory without persistence.
- Kubernetes metrics sources — Three categories of monitoring data in Kubernetes: host-level metrics via Node Exporter, component-level metrics from API Server and kubelet /metrics APIs, and core Kubernetes object metrics via Metrics Server.
- Kubernetes monitoring and logging ecosystem — Integration of Prometheus, Metrics Server, and log collection systems for observability within Kubernetes environments
- Kubernetes multi-cluster user management — Practice of using Contexts to segregate developer access across environments (production/development) and namespaces (frontend/backend) with scoped permissions per role
- Kubernetes name origin and k8s abbreviation — The name Kubernetes derives from Greek meaning 'helmsman' or 'pilot', while the k8s abbreviation refers to the eight letters between k and s in the word.
- Kubernetes Namespace — A virtual cluster mechanism that partitions a single physical Kubernetes cluster into multiple isolated abstract clusters for resource segregation across teams, projects, or business units.
- Kubernetes namespace isolation — Kubernetes namespaces like kubernetes-dashboard and kube-system provide resource isolation, allowing Dashboard and system components to run in separate logical environments within the same cluster.
- Kubernetes Namespace resource isolation — Namespaces provide a logical boundary for resource allocation policies in Kubernetes, allowing administrators to apply quota controls, default values, and constraints at the namespace level rather than per-container.
- Kubernetes namespace scoping — The namespace isolation mechanism in Kubernetes where operations default to the 'default' namespace unless otherwise specified, with --all-namespaces flag enabling cross-namespace queries and separate namespace isolation for resources.
- Kubernetes namespace-based environment isolation — Using separate Kubernetes namespaces (test, prod, infra) to isolate different deployment environments, with dedicated ConfigMaps, Secrets, Ingress rules, and Service resources for each environment.
- Kubernetes Namespace-based resource isolation — The practice of using Kubernetes Namespaces as boundaries for resource allocation policies, enabling multi-tenant resource segregation and per-team resource management.
- Kubernetes Namespaces — Logical partitioning mechanism within Kubernetes clusters that divides cluster resources between multiple users, teams, or applications, providing scope for resource names and isolation.
- Kubernetes NetworkPolicy — A Kubernetes specification for controlling network traffic flow between pods and network endpoints, supporting ingress/egress rules, IP CIDR blocks, and namespace-based restrictions
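A sketch of an ingress policy combining a pod selector and a CIDR block; labels and the CIDR are placeholders:

```yaml
# Allow only app=frontend pods and one IP range to reach
# app=backend pods on TCP 8080; all other ingress is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
        - ipBlock:
            cidr: 10.0.0.0/16    # placeholder CIDR
      ports:
        - protocol: TCP
          port: 8080
```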
- Kubernetes node joining process — Using kubeadm join commands with bootstrap tokens and CA cert hashes to add worker nodes or additional control-plane nodes to an existing cluster.
- Kubernetes Operator pattern — A flexible, programmatic approach to managing stateful applications using Custom Resource Definitions (CRD) to describe desired application state and custom controllers to automate deployment and maintenance operations.
- Kubernetes overview — An open-source platform for managing containerized workloads and services that provides declarative configuration and automation, originally developed by Google and donated to CNCF in 2014.
- Kubernetes PersistentVolumeClaims (PVC) — A user's request for storage resources, specifying size and access modes, which continuously searches for matching PVs to bind with until a suitable volume is found.
- Kubernetes PersistentVolumes (PV) — An abstract storage resource in Kubernetes with a lifecycle independent of Pods, providing storage capacity and access modes that can be statically or dynamically provisioned.
- Kubernetes Pod — The smallest deployable unit in Kubernetes that encapsulates one or more containers, running on a Node and serving as the core abstraction around which all Kubernetes operations revolve.
- Kubernetes Pod admission constraints — Validation rules that prevent Pod creation when resource specifications fall outside defined boundaries—for example, rejecting Pods with CPU requests below minimum limits or exceeding maximum allowed memory quotas.
- Kubernetes Pod annotations for Prometheus scrape configuration — Mechanism for enabling and configuring Prometheus metrics scraping on pods using annotations like prometheus_io_scrape, prometheus_io_port, prometheus_io_path, blackbox_scheme, blackbox_port, blackbox_path for service discovery and probe configuration
- Kubernetes pod command execution — Using kubectl exec to run commands interactively inside a running pod's container for debugging, testing, and service invocation.
- Kubernetes pod conditions — Status indicators that report the readiness state of a pod including Initialized, Ready, ContainersReady, and PodScheduled
- Kubernetes Pod Disruption Budget (PDB) — A Kubernetes resource that limits the number of pods in a replicated application that can be down simultaneously during voluntary disruptions like node maintenance or upgrades
- Kubernetes pod forwarding comparison — Comparison between port-forwarding (temporary, direct Pod access) and Service (persistent, abstracted access) in Kubernetes, highlighting how Services decouple exposure logic from ephemeral Pod lifecycle.
- Kubernetes Pod fundamentals — Core Kubernetes abstraction including Pod states, important configuration fields, horizontal scaling, rolling upgrades, and the Pod lifecycle model
- Kubernetes pod inspection — Basic techniques for examining Kubernetes pods and cluster resources using kubectl get commands, including listing pods with kubectl get pods and viewing all resources with detailed output using kubectl get all -owide
- Kubernetes pod labeling — The practice of attaching key-value metadata labels to pods, which services use as selectors to identify and route traffic to the correct pod subsets.
- Kubernetes pod lifecycle events — State transitions and recorded events during pod execution including scheduling, image pulling, container creation, and startup, visible via kubectl describe
- Kubernetes Pod lifecycle phases — The five high-level states (Pending, Running, Succeeded, Failed, Unknown) that represent the overall status of a Pod throughout its existence and are displayed in kubectl output
- Kubernetes pod naming convention — Kubernetes automatically generates pod names (e.g., mysql-dp-8dfb795cf-2hkgm) consisting of the Deployment name, the ReplicaSet template hash, and a random per-replica suffix, and the full generated name must be used when targeting specific pods.
- Kubernetes pod port-forwarding — A kubectl command that forwards local network ports to a Pod's ports, enabling direct access to containerized services for debugging and verification purposes, as demonstrated accessing nginx through localhost:8080.
- Kubernetes pod priority and preemption — Mechanism using PriorityClass objects to assign integer priorities (up to 1 billion) to pods, enabling high-priority pods to preempt low-priority pods by triggering eviction when scheduling fails.
- Kubernetes pod templates — Configurable pod template definitions in Jenkins that specify container configurations for build agents spawned in Kubernetes, including resources, images, and security settings.
- Kubernetes pod verification — Testing methodology to validate containerized applications are running correctly by checking pod status and using kubectl exec to test service connectivity from within the cluster.
- Kubernetes Port Forwarding for Dashboard Access — Post-deployment access method using kubectl port-forward to expose the Kubernetes Dashboard locally, enabling secure HTTPS access through localhost tunneling to the dashboard pod.
- Kubernetes pre-installation system prerequisites — Mandatory system configuration requirements before Kubernetes installation: disabling SELinux, turning off swap, configuring bridge netfilter for iptables, and setting unique hostnames across nodes.
- Kubernetes Prerequisite Packages — Essential Linux packages required for Kubernetes installation including ebtables, socat, ipset, and conntrack, which handle network bridging, connection tracking, and firewall functionality for container networking.
- Kubernetes Priority and Preemption — Mechanism for high-priority pods to preempt low-priority pods during scheduling failures. Uses PriorityClass objects (value: 1-1000000000, globalDefault flag) to assign pod priorities. High-priority pods enter scheduling queue earlier; if scheduling fails, scheduler attempts to evict lower-priority pods from nodes to accommodate them.
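The mechanism can be sketched as a PriorityClass referenced from a Pod spec; names, the value, and the image are illustrative:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000          # higher value = queued earlier, preempts lower classes
globalDefault: false
description: "Hypothetical class for latency-critical workloads"
---
apiVersion: v1
kind: Pod
metadata:
  name: critical-app
spec:
  priorityClassName: high-priority
  containers:
    - name: app
      image: myapp:1.0  # placeholder image
```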
- Kubernetes provider model — The architectural pattern where infrastructure tools implement provider components that translate language-specific API calls into platform-specific configurations for different Kubernetes distributions and managed services
- Kubernetes PV and PVC — The relationship between PersistentVolume (PV) and PersistentVolumeClaim (PVC), including their binding conditions, specification matching, and the separation of storage provisioning (PV) from consumption (PVC)
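The binding relationship can be sketched with a statically provisioned PV and a PVC whose access mode and requested size match it; the host path and sizes are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 5Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: /data/pv-demo   # placeholder host path (single-node demo only)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes: ["ReadWriteOnce"]   # must be satisfiable by a PV
  resources:
    requests:
      storage: 5Gi                 # binds to a PV with capacity >= this
```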
- Kubernetes PV PVC storage management — PersistentVolume (PV) and PersistentVolumeClaim (PVC) are Kubernetes resources for managing persistent storage in containerized applications, separating storage provisioning from consumption.
- Kubernetes QoS (Quality of Service) classes — Three-tier pod classification: Guaranteed (requests = limits for all containers), Burstable (at least one container has requests set but not all equal to limits), and BestEffort (no requests or limits). QoS determines eviction priority during resource pressure: BestEffort pods are killed first, then Burstable, then Guaranteed.
- Kubernetes QoS BestEffort class — Lowest-priority Pod QoS classification where no containers set any resource requests or limits; these Pods are the first candidates for eviction when system resources are constrained
- Kubernetes QoS Burstable class — Intermediate Pod QoS class for Pods that are not Guaranteed but have at least one container with memory or CPU request set; provides minimum resource guarantees but allows resource usage bursts above requests when capacity available
- Kubernetes QoS classes — Three service quality tiers—Guaranteed, Burstable, and BestEffort—that Kubernetes assigns to Pods based on their request/limit configurations to determine scheduling priority and eviction behavior during resource pressure.
- Kubernetes QoS Guaranteed class — Highest-priority Pod QoS classification where all containers have equal CPU and memory requests and limits (request.memory = limit.memory, request.cpu = limit.cpu); under resource pressure these Pods are evicted only after all BestEffort and Burstable Pods, or when they exceed their own limits
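An illustrative Guaranteed-QoS Pod, where every container sets requests equal to limits for both CPU and memory; the image and values are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-pod
spec:
  containers:
    - name: app
      image: nginx:1.25    # stand-in image
      resources:
        requests:
          cpu: 500m
          memory: 256Mi
        limits:            # identical to requests => QoS class Guaranteed
          cpu: 500m
          memory: 256Mi
```

Dropping the limits (or setting them higher than the requests) would demote this Pod to Burstable; omitting resources entirely would make it BestEffort.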
- Kubernetes RBAC — Role-Based Access Control authorization mechanism in Kubernetes that regulates API access through permissions bound to roles and subjects, implementing the principle of least privilege for users and service accounts.
- Kubernetes RBAC (Role-Based Access Control) — Kubernetes authorization mechanism that regulates access to resources through ServiceAccounts, Roles, and bindings, with granular permissions for reading secrets and controlling access
- Kubernetes ReplicaSet — A Kubernetes resource that ensures a specified number of Pod replicas are running at any given time, typically managed indirectly through Deployments rather than used standalone.
- Kubernetes Request and Limit — Resource specification parameters where Request defines minimum resources required for pod scheduling and Limit defines maximum resources a container can consume, with constraints 0 <= request <= Node Allocatable and request <= limit <= Infinity.
- Kubernetes requests and limits — Two-tier resource specification: requests (used by kube-scheduler for scheduling decisions) and limits (used by kubelet for Cgroups enforcement). This Borg-inspired approach allows users to declare smaller requests for scheduling while setting larger limits for actual resource constraints.
- Kubernetes resource constraints and relationships — Mathematical relationships governing Kubernetes resource allocation: 0 <= request <= Node Allocatable for scheduling, and request <= limit <= Infinity for runtime usage
- Kubernetes resource Limit — Maximum resource value a container can consume, where limit of 0 means unlimited usage; must satisfy constraint request <= limit <= Infinity
- Kubernetes resource management — Techniques for managing and monitoring cluster resources including Request/Limit, Namespace, LimitRange, and Metrics-Server for resource allocation and observability
- Kubernetes resource manifest structure — YAML configuration files defining Kubernetes resources including apiVersion, kind, metadata (name, labels), and spec (container configurations), serving as declarative specifications for desired cluster state.
- Kubernetes Resource Manifests — YAML configuration files defining Kubernetes resources including ServiceAccount, ClusterRoleBinding for RBAC, Deployment for pod management, Service for network exposure, and Ingress for external routing.
- Kubernetes Resource Model — Core framework for managing compute resources through requests (scheduling baseline), limits (cgroups enforcement), and the distinction between compressible (CPU) and incompressible (memory) resources.
- Kubernetes resource provisioning — Kubernetes automatically creates and manages compute resources based on deployed container image specifications
- Kubernetes resource quota and namespace management — Using ResourceQuota and LimitRange policies within namespaces to allocate and constrain compute resources (CPU, memory) across teams or projects
- Kubernetes resource Request — Minimum resource requirement for containers used as scheduling criteria; Pods only scheduled to nodes with allocable resources >= request value, bounded by formula 0 <= request <= Node Allocatable
- Kubernetes resource requests and limits — The dual resource management mechanism in Kubernetes where requests specify minimum guaranteed resources for scheduling decisions and limits cap maximum resource consumption per container.
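The request/limit constraints described in the surrounding entries can be sketched as a small validity check. This is an illustrative sketch, not Kubernetes code; the names `request`, `limit`, and `node_allocatable` are placeholders for a single resource dimension (e.g. CPU millicores).

```python
def can_schedule(request: float, limit: float, node_allocatable: float) -> bool:
    """Check the Kubernetes resource constraints for one resource:
    0 <= request <= node allocatable (scheduling), and
    request <= limit (runtime; a limit of 0 conventionally means unlimited)."""
    if request < 0 or request > node_allocatable:
        return False  # Pod cannot be scheduled on this node
    if limit != 0 and limit < request:
        return False  # invalid spec: limit must be >= request
    return True

# A node with 4 CPUs allocatable:
print(can_schedule(request=2, limit=4, node_allocatable=4))  # True
print(can_schedule(request=8, limit=8, node_allocatable=4))  # False: request > allocatable
print(can_schedule(request=2, limit=1, node_allocatable=4))  # False: limit < request
```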
- Kubernetes resource types — The various resource types manageable through kubectl including pods, services (svc), deployments, replicationcontrollers, and nodes, with case-insensitive naming supporting singular, plural, and abbreviated forms (po, pods, pod).
- Kubernetes scheduling mechanism — How Kubernetes assigns workloads to nodes, including resource models, default scheduling policies, priority mechanisms, and preemption strategies
- Kubernetes scheduling predicates and priorities — Two-phase scheduling algorithm where predicates filter nodes by hard constraints (resources, volumes, taints, affinity) and priorities score remaining nodes 0-10 to select optimal placement using strategies like LeastRequestedPriority.
- Kubernetes Secret — A Kubernetes resource for storing and managing sensitive data such as API keys, passwords, and tokens, similar to ConfigMap but designed to handle credentials with base64 encoding.
- Kubernetes Secret Extraction — The kubectl technique using jsonpath and base64 decoding to retrieve sensitive data stored in Kubernetes secrets, demonstrated with ArgoCD admin credentials.
- Kubernetes Secret security limitations — Native Kubernetes Secrets use base64 encoding (not encryption), making them essentially plaintext; they require additional security measures like etcd encryption, strict RBAC policies, and external KMS solutions for production environments.
- Kubernetes Secrets — A specialized Kubernetes object for storing sensitive data like credentials and certificates, with data stored in base64 encoding and similar volume-mounting capabilities to ConfigMap
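Because Secret values are only base64-encoded, anyone who can read the object can trivially recover the plaintext. A minimal Python illustration (the password string is a placeholder):

```python
import base64

# How Kubernetes stores Secret values: base64 encoding, not encryption.
plaintext = "s3cr3t-password"
encoded = base64.b64encode(plaintext.encode()).decode()  # what appears in the Secret's data field
decoded = base64.b64decode(encoded).decode()             # trivially reversible by any reader

print(decoded == plaintext)  # True — no key is needed to decode
```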
- Kubernetes Service — A Kubernetes abstraction that defines how groups of Pods are accessed and connected, providing stable networking endpoints despite dynamic Pod lifecycle through label-based selection.
- Kubernetes Service Account — Namespace-scoped identity resource for Pods to authenticate with the API server, automatically created as 'default' in each namespace with a token for in-cluster applications to use for authentication.
- Kubernetes Service Account and ClusterRoleBinding — The RBAC mechanism for granting permissions to service accounts through cluster role bindings, demonstrated via the admin-user configuration for Dashboard access
- Kubernetes Service Delivery Workflow — A standardized four-step process for deploying services on Kubernetes: prepare container images, prepare resource manifests, resolve DNS domains (if using ingress), and apply manifests.
- Kubernetes service discovery — The mechanism by which pods within a cluster can locate and communicate with services using DNS names in the format service-name.namespace.svc or service-name:port.
- Kubernetes service discovery with CoreDNS — DNS-based service discovery mechanism in Kubernetes clusters that maps service names to cluster IPs, enabling dynamic service resolution through DNS queries
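The DNS naming scheme above can be sketched as a small helper. This is a sketch assuming the default `cluster.local` cluster domain; within the same namespace, the bare service name also resolves.

```python
def service_fqdn(service: str, namespace: str = "default",
                 cluster_domain: str = "cluster.local") -> str:
    """Build the fully qualified DNS name CoreDNS resolves for a Service."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

print(service_fqdn("my-api", "prod"))  # my-api.prod.svc.cluster.local
```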
- Kubernetes service exposure methods — Three primary Service types for exposing applications: ClusterIP (internal cluster access), NodePort (exposes on each node's IP at static port), and LoadBalancer (cloud provider external load balancer), with Ingress providing HTTP/HTTPS layer-7 routing.
- Kubernetes Service external access methods — Three approaches for exposing Kubernetes Services externally: NodePort (host port mapping with SNAT), LoadBalancer (cloud provider integration), and ExternalName (DNS CNAME aliasing)
- Kubernetes Service implementation modes — Service discovery mechanism implemented via kube-proxy with iptables (rules scale poorly) or IPVS (kernel-space load balancing, recommended for large clusters), providing ClusterIP, NodePort, LoadBalancer, and ExternalName types
- Kubernetes Service Port Management — The challenge of managing multiple external port numbers when exposing services directly via NodePort or LoadBalancer, which Ingress solves by providing unified port 80/443 access.
- Kubernetes service port-forwarding — Method of exposing Kubernetes services locally using kubectl port-forward, including binding to all interfaces with --address 0.0.0.0 and mapping local ports to service ports
- Kubernetes Service types — Three service networking types in Kubernetes: LoadBalancer for external load balancer integration, ClusterIP for internal cluster access (default), and NodePort for exposing services on each node's IP at a static port.
- Kubernetes Service types (LoadBalancer and NodePort) — Service configuration types in Kubernetes that determine how services are exposed externally: LoadBalancer provisions cloud provider load balancers, while NodePort exposes the service on each node's IP at a static port.
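A minimal NodePort Service manifest illustrating the fields these entries refer to; the names and port numbers are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: helloworld          # placeholder name
spec:
  type: NodePort            # ClusterIP (default) | NodePort | LoadBalancer
  selector:
    app: helloworld         # routes traffic to Pods carrying this label
  ports:
    - port: 80              # ClusterIP port inside the cluster
      targetPort: 8080      # container port on the Pods
      nodePort: 30080       # static port opened on every node (30000-32767 by default)
```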
- Kubernetes SIG CLI — The Kubernetes Special Interest Group responsible for command-line interface tools, including the maintenance and development of Kustomize as part of the Kubernetes ecosystem.
- Kubernetes Smooth Upgrade Procedure — A zero-downtime node upgrade technique involving node drain, pod migration to other nodes, binary replacement using symbolic links, and supervisor-managed service restart without server shutdown.
- Kubernetes StatefulSet — A Kubernetes workload API object used for managing stateful applications, providing stable network identifiers, persistent storage, and ordered deployment strategies
- Kubernetes Storage Management — Persistent data handling through PersistentVolumes (PV) and PersistentVolumeClaims (PVC) with static and dynamic provisioning, storage classes, access modes (ReadWriteOnce, ReadWriteMany), and volume types (emptyDir, hostPath, NFS, cloud storage).
- Kubernetes three brothers — The conceptual grouping of Pod, Service, and Deployment as the core Kubernetes resource types that work together to implement advanced operations like load balancing, rolling updates, security, and monitoring in containerized applications.
- Kubernetes three brothers architecture — A conceptual framework describing Pod, Service, and Deployment as the three core Kubernetes resource types that work together to implement advanced operations like load balancing, rolling updates, security, and monitoring in containerized applications.
- Kubernetes version rollout verification — Process of validating gradual deployment of application versions (v1, v2) across Kubernetes pods through repeated HTTP requests, demonstrating traffic distribution during rolling updates
- Kubernetes Volume — Kubernetes Volumes are directories that store data, accessible to containers in Pods. Unlike Docker volumes, K8s Volumes have lifecycle concepts, supporting both ephemeral volumes (tied to Pod lifecycle) and persistent volumes (outlasting Pods). They are defined in .spec.volumes and mounted via .spec.containers[*].volumeMounts.
- Kubernetes Volume types — Different storage mechanisms in Kubernetes including EmptyDir, ConfigMap, Secret, and Persistent Volume/Persistent Volume Claim (PV & PVC) for managing data persistence
- Kubernetes Worker Node Components — The infrastructure components that run application workloads, including Node (physical/virtual host), kubelet (container lifecycle agent), kube-proxy (network proxy for service discovery and load balancing), and container runtime engine (e.g., Docker, CRI-O).
- Kubernetes workload registrar — A SPIRE component that automatically watches the Kubernetes API server and creates workload registrations in SPIRE corresponding to pods that match certain selectors, eliminating manual workload registration.
- Kubernetes YAML manifest structure — The standard document format for Kubernetes resources, containing apiVersion, kind, metadata, and spec sections that define resource configuration
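The four top-level sections can be illustrated with a skeleton Deployment manifest; all names and the image are placeholders:

```yaml
apiVersion: apps/v1        # API group and version of the resource
kind: Deployment           # resource type
metadata:
  name: example-app        # placeholder
  labels:
    app: example-app
spec:                      # desired state; schema depends on kind
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: example/app:1.0   # placeholder image
```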
- Kubernetes 本地集群安裝教學 — Local Kubernetes cluster installation tutorial based on Docker Desktop for macOS, covering downloading Docker Desktop, enabling the Kubernetes feature, and verifying cluster status.
- Kubernetes 版本更新與 API 棄用速度 — Kubernetes ships a minor release roughly every three months and deprecates APIs quickly; older online tutorials may be outdated, so consult the latest documentation or source code when migrating.
- Kubernetes-based container building — The approach of building container images directly within Kubernetes pods rather than using external Docker daemons, enabling containerized build pipelines.
- Kubernetes-based PaaS Platform Architecture — A comprehensive platform-as-a-service built on Kubernetes that integrates CI/CD pipelines, configuration management, monitoring, and automated deployment capabilities for containerized applications.
- Kubernetes-native build — A pattern of implementing build and compilation processes directly within Kubernetes infrastructure using custom resources and controllers rather than external CI/CD systems.
- kubernetes-recreate-deployment-strategy — Kubernetes deployment strategy where all existing Pods are terminated before new Pods are created, causing service downtime during the update window but providing simple implementation.
- kubernetes-rolling-update-deployment-strategy — Kubernetes default deployment strategy that gradually replaces old Pods with new ones, maintaining service availability by creating new Pods before terminating old ones based on configurable parameters.
- KubeSphere Installation Verification — Post-installation validation method using kubectl to check installation logs from the ks-install pod within the kubesphere-system namespace to confirm all components started successfully.
- KubeSphere Web Console — Web-based management interface for KubeSphere accessed via HTTP (default port 30880), providing cluster administration, service monitoring, and component management capabilities with default admin credentials.
- Kustomize — A Kubernetes configuration customization tool that enables YAML overlay and modification through a declarative, template-free approach to managing manifests across different environments.
- KV cache quantization — Memory optimization technique configurable via kv_bits and kv_group_size parameters, applied during prefill phase to reduce memory footprint at the cost of bypassing max_kv_size limits.
- Label-based deployment isolation — Using version labels (version: v1, version: v2) on Deployments and Services to maintain multiple coexisting versions and enable traffic segregation
- label-based selective deployment — The practice of deploying specific versions or components of a Kubernetes application using label selectors (e.g., -l version=v1, -l service=helloworld) to apply only matching resources from a YAML file.
- Label-based Traffic Control in Kubernetes — Using Kubernetes Service selector field to route traffic to specific pod versions by matching pod labels, enabling instant traffic switching between application versions without modifying pods themselves.
- Language-specific container debugging — Platform-specific debugging configurations for Node.js, .NET Core, and Go applications running in Docker containers, leveraging language-specific debuggers and container-aware tooling.
- Large file sharing solutions — Online platforms designed to transfer files that exceed email attachment size limits, often using cloud storage infrastructure.
- Learning routine and fixed practice patterns — Creating standardized pre-practice rituals and workflows (similar to basketball free-throw routines) to systematize skill training, reduce cognitive load, and enable focus on key improvement areas.
- Leiden聚类算法 — A graph-topology-based community detection algorithm that discovers groups of nodes in a knowledge graph without embeddings or a vector DB, used to automatically organize related concepts and code structure
- Let's Encrypt SSL Certificates — Free, automated SSL/TLS certificate authority providing domain validation through ACME protocol challenges (http-01) served via .well-known/acme-challenge/ paths.
- Link pages — Organizational structures in Zettelkasten including index pages (Epic/總目錄) for reference and link pages that connect related notes into coherent thematic pathways.
- Link pages and hub pages — Organizational structures in Zettelkasten including index pages (Epic/總目錄) for reference and link pages that connect related notes into coherent thematic pathways.
- Link-based classification — An organizational approach that categorizes and navigates information through direct links between notes rather than hierarchical folder structures.
- Link-based organization — An organizational approach that categorizes and navigates information primarily through direct links between notes rather than hierarchical folder structures, enabling non-linear knowledge discovery
- Linux bridge management with brctl — Essential commands for creating and managing Linux network bridges using the brctl utility, including adding/removing bridge devices and attaching physical network interfaces.
- Linux Cgroups (Control Groups) — Kernel feature for constraining and controlling process group resources (CPU, memory, disk I/O) that works alongside namespaces to implement container resource limits
- Linux DHCP network configuration — Method for configuring dynamic IP addresses on Linux systems using DHCP protocol, requiring fewer configuration parameters than static IP setup and allowing automatic IP assignment from a DHCP server.
- Linux GUI applications on Windows — The capability to run Linux graphical applications (such as gedit, GNOME tools) natively on Windows through WSLg without X11 server configuration
- Linux hostname management — Procedures for checking and modifying system hostnames using hostnamectl and the /etc/hosts file on CentOS 7 systems
- Linux Namespace isolation — Kernel mechanism that modifies process view to create isolated environments for containers, including PID, Network, Mount, UTS, IPC, and User namespaces
- Linux static IP network configuration — Method for configuring static IP addresses on Linux systems using network configuration files, setting parameters such as BOOTPROTO, IPADDR, GATEWAY, NETMASK, DNS, and network device properties.
- List pods across all namespaces — Using 'kubectl get pods -A' to list all pods across all namespaces in a Kubernetes cluster, providing visibility into the entire cluster's workload distribution.
- Listing Kubernetes pods across all namespaces — Using kubectl get pods -A to list all pods running across all namespaces in a Kubernetes cluster, useful for debugging and locating pod names.
- Literature notes — Reading and research notes that capture insights from external sources like books, articles, and videos, serving as raw material for creating permanent notes in a knowledge management system
- Live2D and VRM model rendering — 3D character visualization using Three.js with TresJS integration for both Live2D (2D) and VRM (3D) avatar formats
- llama.cpp GGUF IQ4NL format — Optimized model format combination for efficient local inference requiring latest llama.cpp runtime with unsloth-converted quantized weights
- LLM-based compression — Token compression approach using OpenAI API with context-aware compression/decompression prompts, achieving highest compression rates (40-58%) at the cost of API dependency and slower processing (~2s per request).
- LoadBalancer service EXTERNAL-IP pending state — The `<pending>` status in the EXTERNAL-IP field indicates the Kubernetes cluster cannot configure a load balancer, typically in environments without LoadBalancer service support
- LoadBalancer service type in Kubernetes — Kubernetes Service configuration that exposes applications externally using cloud provider load balancers or node ports for external traffic access
- loadgen.sh load generation script — Parallel load generation script used to simulate traffic for testing autoscaling behavior in Istio sample applications.
- Local AI vs cloud API latency comparison — Local deployment provides millisecond-scale response times versus cloud API network delays and rate limiting, critical for AI Agent workflows requiring dozens to hundreds of iterative self-correction loops
- Local development containerization — The practice of building and testing containerized applications locally before deploying to container orchestration platforms
- Local development DNS simulation — Practice of simulating production-like domain names in local development environments through hosts file manipulation, eliminating the need for actual DNS infrastructure during development.
- Local Development Environment Setup — Comprehensive development environment configuration including WSL2, Docker Desktop, package managers (Chocolatey, Scoop), IDE setup (VSCode, IntelliJ), and toolchain management for cross-platform development workflows.
- Local domain configuration via hosts file — Mapping custom domain names to localhost IP addresses (127.0.0.1) in /etc/hosts file to enable local HTTPS development with realistic domain URLs.
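A typical hosts-file entry for this kind of local DNS simulation; the domain name is a placeholder:

```
# /etc/hosts — map a production-like domain to the local machine
127.0.0.1   myapp.local.test
```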
- Local Ingress testing with port-forwarding — Testing ingress configurations locally by using kubectl port-forward to expose the ingress controller service on localhost, enabling access to routed services through configured hostnames.
- Local Kubernetes cluster setup workflow — Standard process for setting up local Kubernetes: install Docker Desktop for Mac, enable Kubernetes in settings, wait for component initialization, and verify cluster status using kubectl commands (cluster-info, get nodes, version).
- Local Kubernetes development workflow — Development practice using tools like Skaffold, minikube, and kind to create automated build-to-deploy pipelines for Kubernetes applications on local machines
- LocalStack Docker Network Setup — Configuration for creating and using a shared Docker network to enable communication between LocalStack container and other containers like AWS CLI
- LocalStack Terraform Backend Configuration — Configuration pattern for using Terraform with LocalStack's simulated AWS services (S3 for state storage and DynamoDB for state locking) instead of actual AWS infrastructure
- LOCATE function for index-friendly pattern matching — Using the LOCATE() function instead of LIKE can enable index usage for substring searches, returning the position of a keyword within a field.
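MySQL's LOCATE(substr, str) returns the 1-based position of substr within str, or 0 when absent, so a filter reads like LOCATE('kw', col) > 0. A Python sketch of those semantics (not of the index behavior):

```python
def locate(substr: str, s: str) -> int:
    """Emulate MySQL LOCATE(): 1-based position of substr in s, 0 if absent."""
    pos = s.find(substr)
    return pos + 1  # str.find is 0-based and returns -1 when missing

print(locate("bar", "foobar"))   # 4 (1-based)
print(locate("baz", "foobar"))   # 0 (not found)
```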
- Lock-based concurrency control — A concurrency control mechanism that uses locks to coordinate access to shared data, preventing conflicts and maintaining transaction isolation.
- lockstep version management — In a monorepo, all packages are forced to share the same version number, avoiding dependency hell and version inconsistencies through a unified release strategy for package versions.
- Log piping to Netcat — Technique for redirecting application output or log files to Netcat for network transmission, enabling remote logging or real-time data streaming
- Log4j2 — A Java logging framework that supports multiple appenders, rolling file policies, and XML-based configuration for flexible log management across different components.
- Log4j2 Multiple Appenders Configuration — Pattern for configuring multiple named loggers with dedicated file appenders in Log4j2, enabling segregated logging streams for different components (e.g., main, bolt, spout) while maintaining console output.
- Log4j2 PatternLayout — Pattern syntax for defining log output format with placeholders for level, timestamp, thread, class, and message using customizable layout patterns.
- Log4j2 XML configuration structure — Hierarchical XML format organizing Properties, Appenders (Console, RollingFile), and Loggers sections with monitorInterval for runtime reconfiguration.
- Logger additivity setting — Configuration attribute (additivity="false") that prevents log events from propagating to parent loggers, enabling isolated logging for specific components.
- logging-agent (DaemonSet) pattern — Node-level log collection approach where a logging agent runs as DaemonSet, mounts container log directories, and forwards stdout/stderr output to backend storage with minimal application intrusion.
- Logstash configuration — Pipeline configuration with Kafka input plugin consuming from topics (k8s-fb-test-., k8s-fb-prod-.), JSON filter for parsing log messages, and Elasticsearch output plugin creating time-based indices (k8s-$ENV-%{+YYYY.MM.DD}).
- Long-context model tuning parameters — Configuration settings for preventing analysis paralysis in large context scenarios: temperature=1, top-p=0.9, min-p=0.1, top-k=20, repeat-penalty=1.05, with visual tasks requiring specific token ranges
- LongAccumulator and DoubleAccumulator — General-purpose concurrent accumulator classes in Java that accept custom binary operations (like Long::sum) as reducer functions, with LongAdder/DoubleAdder being specialized implementations for addition.
- LongAdder and DoubleAdder — High-performance concurrent accumulator classes in Java that use Striped64 internally for efficient counting operations under high contention.
- Loose Coupling via Registry Pattern — Extension mechanism where optional subsystems (MCP, plugins, memory providers) self-register at import time rather than being hard dependencies, with check_fn gating for availability
- manifest.json Chrome Extension Configuration — The manifest.json file defines the Chrome extension's configuration including manifest version, name, description, version number, icons, browser action settings, permissions, and content script injection rules.
- MANIFEST.MF agent configuration — The manifest file configuration required for Java Agent JAR files, specifying the Premain-Class entry point and capabilities like Can-Redefine-Classes.
- MANIFEST.MF configuration — The manifest file in Java Agent JARs that specifies the Premain-Class entry point and capabilities like Can-Redefine-Classes for bytecode modification.
- Manual plugin relocation for portability — The technique of moving existing Eclipse plugins from their installation directories to the dropins folder to enable portable deployment and manual management without relying on Eclipse's internal update mechanisms.
- Manual sidecar injection with istioctl — A technique using istioctl kube-inject to modify Kubernetes manifests and add Istio proxy sidecars before deployment, used when automatic injection is disabled.
- Map of Contents (MOC) — An organizational hub page structure that aggregates related documentation links into a single navigational index, serving as an entry point for broader thematic topics
- Map of Contents (MOC) for Docker — A navigational hub page organizing Docker-related documentation through a structured index linking to core Docker topics including fundamentals, networking (bridge), and database deployments (MySQL).
- map-of-content-moc — A navigational hub page structure in personal knowledge management systems that organizes related documentation links into a hierarchical index, serving as an entry point for exploring thematic topics.
- Markdown checkbox task tracking — Using Markdown syntax (- [ ]) to create interactive task lists that can track completion status while integrating directly with note-taking and documentation systems.
- Markdown format copying — The capability to directly copy browser tab information in Markdown format, enabling seamless integration into note-taking systems like Obsidian, Roam Research, or other Markdown-based documentation tools.
- Markdown table conversion for HTML/Word — Methods and tools for converting tables from HTML or Word format into Markdown table syntax, particularly for use with Obsidian
- Markdown table syntax — Standard table formatting using pipe characters and hyphens to create structured rows and columns in Markdown documents
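A minimal example of the pipe-and-hyphen syntax; the cell contents are placeholders:

```markdown
| Name  | Role   |
| ----- | ------ |
| Alice | admin  |
| Bob   | viewer |
```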
- Markdown TODO list management — The practice of maintaining task lists and work-in-progress tracking using markdown checkbox syntax, often with metadata like tags and dates.
- Maven local JAR dependency installation — Technique for adding third-party JAR files to a local Maven repository using the install:install-file command, enabling dependency management for libraries not available in public repositories
- Maven metadata.xml structure — Standard Maven metadata file (maven-metadata.xml) that provides version and artifact information for repository browsing and dependency resolution.
- Maven properties configuration — Standard Maven properties for defining project-level settings including source encoding, reporting output encoding, Java compiler version, and web application build behavior.
- Maven repository publishing workflow — Automated process using GitHub Actions to build and publish Maven jar packages to GitHub Packages registry.
- Maven resources filtering and inclusion/exclusion — Build configuration for controlling which resource files are included in the final artifact, using includes/excludes patterns and filtering capabilities
- Maven system scope dependency — Maven dependency configuration using scope=system and systemPath to directly reference local JAR files within the project structure, bypassing repository requirements
- maxsurge-deployment-parameter — RollingUpdate configuration parameter defining the maximum number of Pods that can be created above the desired replica count during a deployment update.
- maxunavailable-deployment-parameter — RollingUpdate configuration parameter specifying the maximum number of Pods that can be unavailable during the update process, providing control over service disruption tolerance.
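The two parameters appear together under a Deployment's update strategy. A minimal fragment showing where they live (replica count and values are illustrative):

```yaml
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 Pod above the desired count during the update
      maxUnavailable: 1    # at most 1 Pod may be unavailable at any time
```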
- MCP Server模式 — Graphify can run as an MCP (Model Context Protocol) server, exposing tool interfaces such as query_graph, get_node, and shortest_path for AI coding assistants to query the knowledge graph
- Memory hierarchy performance — The performance relationship between different storage tiers, where network access is approximately 100x slower than memory access, and hard drive seek time is comparable to reading 1MB of data
- Mental models in learning — Cognitive frameworks that help understand how things work in the world; identifying patterns and mental models during early research facilitates skill acquisition by enabling better prediction and understanding of complex concepts through analogies.
- Mermaid diagrams in Obsidian — Creating and embedding various types of diagrams (flowcharts, sequence diagrams, Gantt charts) directly in Obsidian notes using Mermaid syntax within code blocks.
- Mermaid flowchart syntax — Text-based markup language for creating flowcharts with directional layouts (LR for horizontal, TD for vertical), supporting various node shapes (square, rounded, diamond) and conditional branching.
- Mermaid Gantt charts — Time-based project planning diagrams showing tasks, durations, dependencies, and completion status using section-based organization and date/duration specifications.
- Mermaid sequence diagrams — Diagram type for representing interactions between entities over time, supporting message passing, loops, notes, and different arrow types for synchronous/asynchronous communication.
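A small flowchart illustrating the LR layout and node shapes described in the entries above; the labels are placeholders:

```mermaid
flowchart LR
    A[Start] --> B{Tests pass?}
    B -- yes --> C(Deploy)
    B -- no --> D(Fix and retry)
    D --> B
```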
- Message Acknowledgment — A reliability mechanism in message queuing where consumers explicitly confirm successful processing of messages, enabling guaranteed delivery and preventing message loss during failures.
- Message Authentication Code (MAC) — Cryptographic checksum mechanism for verifying data integrity and authenticity of messages, part of the JCA/JCE security services.
- Message Queue — A fundamental component of asynchronous system architecture that enables temporary storage and deferred processing of messages between distributed applications, decoupling senders from receivers.
- META-INF/services convention — Directory-based configuration pattern where service provider implementation files are placed in META-INF/services/ with filenames matching the fully-qualified interface name, containing implementation class names
- META-INF/services service provider mechanism — A Java service loader convention where JAR files declare implementations in META-INF/services/fully.qualified.InterfaceName files, enabling runtime discovery and instantiation of implementations without manual configuration.
- MetalLB IP address allocation strategy — IP address configuration method for load balancer services where the 3rd octet is user-configurable (100-255 range) while the first two octets derive from Docker's subnet and the 4th octet is restricted to the 200-240 range, providing 40 possible public IPv4 addresses per cluster
- Metrics — Quantitative numerical measurements used in monitoring to track system performance, resource utilization, and operational behavior over time.
- Metrics Server — A Kubernetes extension capability that provides core monitoring metrics for Pods, Nodes, containers, and Services, designed to replace Heapster as the standard metrics provider.
- metrics.k8s.io API — Kubernetes API extension provided by Metrics Server that exposes resource metrics through /apis/metrics.k8s.io, registered as an APIService resource to integrate with the Kubernetes API aggregation layer.
- Micrometer Prometheus integration — Java/Spring Boot application instrumentation using Micrometer library with Prometheus registry to expose application metrics for monitoring
- Microservices architecture — A software architectural pattern that structures applications as a collection of loosely coupled, independently deployable services, each running in its own process and communicating through lightweight mechanisms.
- Microservices Architecture Patterns — Distributed system design approach structuring applications as loosely coupled, independently deployable services communicating through lightweight mechanisms, with patterns for service discovery, configuration management, circuit breakers, and distributed tracing.
- Microservices event-driven communication — Architectural pattern where microservices communicate asynchronously through events using message brokers like NATS Streaming Server rather than direct HTTP requests, enabling loose coupling between services.
- Microservices in Payment Systems — The application of microservice architecture patterns to payment processing platforms, enabling scalability, independent deployment of payment components, and separation of concerns across different payment functions.
- Microservices learning path: Node.js and React — A comprehensive educational course covering microservices architecture using Node.js for backend services, React for frontend interfaces, with Docker and Kubernetes for containerization and orchestration.
- MIME type detection from byte arrays — Technique for detecting file MIME types from raw byte data using URLConnection.guessContentTypeFromStream() for validation before processing
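The entry describes Java's URLConnection.guessContentTypeFromStream(); an analogous magic-byte check can be sketched in Python. The signature table here is illustrative and covers only two formats, not a complete implementation:

```python
# Detect a MIME type from leading "magic bytes" -- an analogue of Java's
# URLConnection.guessContentTypeFromStream(), not a full implementation.
_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "image/png",
    b"\xff\xd8\xff": "image/jpeg",
}

def guess_mime(data: bytes):
    """Return a MIME type if the byte prefix matches a known signature, else None."""
    for magic, mime in _SIGNATURES.items():
        if data.startswith(magic):
            return mime
    return None

print(guess_mime(b"\x89PNG\r\n\x1a\n" + b"\x00" * 8))  # image/png
print(guess_mime(b"plain text"))                        # None
```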
- Mind mapping for technology outlines — Using mind maps to organize and visualize the structural outline of a technology, helping to establish mental models and identify key components before detailed study.
- Mineflayer Minecraft integration — AI agent capability to connect to and play Minecraft servers programmatically through the Mineflayer JavaScript library
- Minikube — A tool that runs a local Kubernetes cluster on a single machine, supporting multiple drivers (Docker, VirtualBox, etc.) for development and testing environments.
- Minikube addon system — Extensible addon architecture for Minikube clusters that enables optional Kubernetes components like ingress, dashboard, metrics-server, and storage-provisioning tools.
- minikube addons system — Extensible plugin architecture for minikube that allows easy installation of additional Kubernetes services and functionality through the addons catalog.
- Minikube cluster lifecycle management — The operational procedures for managing Kubernetes clusters locally, including starting, stopping, pausing, deleting clusters and configuring resource allocation.
- Minikube cluster verification and status — Commands and procedures for verifying Minikube cluster health, including minikube status, kubectl cluster-info, and node readiness checks.
- minikube configuration management — Commands for setting cluster parameters such as driver, CPU, memory allocation, and viewing current configuration, with changes requiring a restart to take effect.
- minikube deployment and service exposure — Workflow patterns for deploying applications to minikube using kubectl, creating NodePort services, and accessing them through minikube service tunneling or port forwarding.
- Minikube Docker Driver Configuration — Configuration and startup parameters for running Minikube with Docker as the container driver, including resource allocation (CPUs, memory), CNI selection, and addon management
- Minikube installation on WSL Ubuntu — Step-by-step procedure for installing Minikube and kubectl on Windows Subsystem for Linux running Ubuntu 20.04 LTS, including system updates, dependency installation, and binary configuration.
- MIT (Most Important Task) single-focus strategy — Writing only one MIT (Most Important Task) daily ensures 100% completion rate, which triggers the brain's 'I won' reward signal. Multiple tasks with partial completion send 'I lost' signals and break the feedback loop.
- MLM-based compression — Compression method using RoBERTa masked language models to identify and remove the most predictable tokens (top-k selection), achieving 20-30% compression with free offline operation requiring ~500MB model storage.
- MLX Engine architecture — LM Studio's dual-path inference engine architecture with ModelKit for text/vision models with add-ons and VisionModelKit for generic vision models via mlx-vlm wrapper.
- MLX Engine Python API — Public API providing load_model(), create_generator(), and tokenize() functions for running LLM inference with support for streaming generation, structured output via JSON schemas, stop strings, and temperature/top-p sampling.
- MLX unified memory architecture — Apple Silicon's MLX engine leverages unified memory architecture, allowing efficient GPU utilization and enabling large models to run by using system RAM supplemented by swap when necessary.
- Mobile app network debugging — Practice of monitoring and analyzing HTTP/HTTPS traffic from mobile applications to debug API calls, inspect data payloads, and troubleshoot connectivity issues
- Mobile Device Proxy Configuration — Network configuration on mobile devices to route traffic through a debugging proxy server, enabling packet capture and analysis of mobile app traffic.
- MOC (Map of Content) — An index or hub page structure in note-taking systems that organizes related topics and sub-pages hierarchically, serving as a navigational entry point for broader themes.
- MOC (Map of Content) methodology — An organizational approach that structures notes into hierarchical layers connected through hyperlinks, creating a navigable knowledge network rather than a flat filing system.
- MOC (Map of Content) Navigation Structure — Organizational hub pages that aggregate related documentation links into hierarchical indices, serving as navigational entry points for broader topic areas like DevOps, tools, or specific applications.
- MOC (Map of Content) organization — A navigational hub structure that organizes related documentation links into a hierarchical index, serving as an entry point for exploring thematic topics.
- MOC (Map of Content) Pattern — Organizational hub page structure that aggregates related documentation links into a single navigational index
- MOC (Map of Contents) — A central index or hub page that organizes and links to related notes within a knowledge base, serving as a navigational entry point for a topic area.
- MOC (Methodology of Categories) — A categorical tagging or organizational system used within DevOps documentation for classifying and structuring technical content.
- Mock SMTP server — A testing utility that simulates an SMTP mail server without actually sending emails, allowing developers to test email functionality safely and in isolation.
- MockMvc Testing Framework — Spring's test framework for performing integration testing of MVC controllers without a full servlet container, using MockMvcBuilders and perform-expect pattern
- ModelKit vs VisionModelKit initialization paths — MLX Engine uses two distinct initialization paths based on model_type config: ModelKit for text models and vision models with specialized add-ons (supporting advanced optimizations), and VisionModelKit as a generic wrapper for mlx-vlm models (with limited feature support).
- Modern backend development complexity — Contemporary backend development requires mastering multiple programming languages, web frameworks, diverse database types (relational, NoSQL, caching), and infrastructure concepts like load balancing, auto-scaling, and database replication, necessitating container orchestration platforms like Kubernetes.
- Modular design patterns in Java — Architectural patterns and practices for organizing Java code into decoupled, reusable modules with explicit dependencies, including service loading, module layers, and migration strategies for legacy codebases.
- MoE (Mixture of Experts) architecture — Neural network design where parameters are divided into specialized expert modules, activating only relevant subsets per inference instead of all parameters, achieving 5x speed improvement over dense models
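The "activate only relevant subsets" idea boils down to top-k gating: score every expert, run only the k best, and mix their outputs by normalized gate weight. A minimal sketch (toy gating, not any specific MoE implementation):

```python
import heapq

def route(gate_scores, k=2):
    """Pick indices of the top-k experts by gate score."""
    return heapq.nlargest(k, range(len(gate_scores)), key=lambda i: gate_scores[i])

def moe_forward(x, experts, gate_scores, k=2):
    """Run only the selected experts; the rest stay inactive (the source of the speedup)."""
    active = route(gate_scores, k)
    total = sum(gate_scores[i] for i in active)
    # Weighted mixture of the active experts' outputs.
    return sum(gate_scores[i] / total * experts[i](x) for i in active)
```

With, say, 8 experts and k=2, only a quarter of the expert parameters are touched per token, which is where the speedup over a dense model of equal total size comes from.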
- MongoDB — A document-oriented NoSQL database system designed for flexible schema design and horizontal scalability, using BSON format for data storage and a JSON-style query language with a JavaScript-based shell.
- Monitoring and Observability Stack — Comprehensive monitoring architecture combining Prometheus for metrics collection, Grafana for visualization, Elasticsearch/Fluentd for logging, and distributed tracing for complete system observability in cloud-native environments.
- Monitoring Model — A comprehensive framework for system observability consisting of four core data types: metrics (quantitative measurements), logs (event records), tracing (request flow analysis), and health checks (status verification).
- mtime-based caching for file-based data — Caching strategy using file modification times (@cache_with_mtime() decorator) to avoid re-reading unchanged files, with cache invalidation triggered automatically by file watchers
- multi-agent Git collaboration rules — Parallel-agent Git collaboration rules defined in AGENTS.md: each agent may only git add the files it modified itself; git add -A is forbidden, preventing conflicts and overwrites when multiple agents modify the same repo concurrently.
- Multi-agent orchestration — Unified management platform supporting multiple AI agent providers (Claude Code, Codex, OpenClaw, OpenCode) with centralized task routing and execution monitoring.
- Multi-cluster KinD deployment considerations — When running multiple KinD clusters simultaneously, each cluster must be assigned a unique ip-octet parameter to prevent overlapping IP address ranges and network conflicts between clusters
- Multi-container volume sharing — The practice of configuring an emptyDir volume to be mounted in multiple containers within the same Pod with different mount paths, enabling file sharing and data exchange between co-located containers.
- Multi-environment configuration management — Pattern for managing application configurations across environments (dev, test, prod) using separate ConfigMaps and Secrets per namespace or environment, enabling single application image deployment with environment-specific settings.
- Multi-environment deployment patterns — Deployment strategies that manage application configuration across different environments (production, development, testing) by using template-based approaches to minimize duplication while allowing environment-specific parameter values.
- Multi-environment ZooKeeper deployment pattern — Splitting previously clustered ZooKeeper into standalone instances per environment (zk-test.od.com, zk-prod.od.com) to support environment-specific service discovery and registry coordination for Dubbo microservices.
- Multi-host Docker container networking — Technique for enabling direct communication between Docker containers running on different physical hosts by bridging network interfaces and allocating non-overlapping IP ranges to each host
- Multi-level caching — A caching architecture combining multiple cache layers (e.g., Caffeine + Redis) to optimize performance by leveraging fast local memory alongside distributed cache systems
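The read path of a Caffeine + Redis style hierarchy is: check local memory, fall back to the shared store, and only then hit the loader, backfilling each layer on the way out. A toy sketch (plain dicts stand in for both Caffeine and Redis; `TwoLevelCache` is a hypothetical name):

```python
class TwoLevelCache:
    """L1: small in-process dict (stand-in for Caffeine);
    L2: shared store (stand-in for Redis, here just a dict)."""

    def __init__(self, l2, l1_max=128):
        self.l1 = {}
        self.l1_max = l1_max
        self.l2 = l2

    def get(self, key, loader):
        if key in self.l1:           # fastest path: local memory
            return self.l1[key]
        if key in self.l2:           # second path: distributed cache
            value = self.l2[key]
        else:                        # miss everywhere: load and backfill L2
            value = loader(key)
            self.l2[key] = value
        if len(self.l1) >= self.l1_max:
            self.l1.pop(next(iter(self.l1)))  # naive eviction for the sketch
        self.l1[key] = value
        return value
```

The payoff shows up across processes: a second instance with a cold L1 still avoids the loader because L2 is shared.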
- Multi-level context hierarchy — Creating chains of parent-child-grandchild relationships where deeply nested contexts can access beans from all ancestor contexts in the hierarchy chain.
- Multi-Modal Content Ingestion Pipeline — Unified routing system (ingest skill) that automatically detects input types and dispatches to specialized processors for links/articles/tweets, video/audio/PDF/books, meeting transcripts, GitHub repositories, and calendar events.
- Multi-network multicluster configuration — An Istio multicluster setup for clusters in different networks that requires exposing services through east-west gateways to enable cross-cluster load balancing.
- Multi-node PVC access considerations — In production multi-node clusters, Pods are distributed across nodes by scheduler unless using nodeSelector, requiring ReadWriteMany or ReadOnlyMany access modes for shared storage across nodes.
- Multi-selection text capture workflow — A technique for collecting multiple non-contiguous text selections from a single web page before batch-pasting them into a destination document.
- Multi-service container orchestration — The practice of coordinating multiple interconnected container services (e.g., a MySQL database with an Adminer web interface) as a single application unit using Docker Compose.
- Multi-stage Docker build — A Docker optimization technique that uses multiple FROM statements in a single Dockerfile to create intermediate build stages, allowing compilation in a full-featured environment before copying only the compiled binary to a minimal runtime image, reducing final image size.
- Multi-stage Docker build for Drone — A containerized build process using Alpine-based builder stage with Go 1.18 to compile Drone CI from source with custom build tags, then packaging the binary in a minimal runtime image.
- Multi-stage Docker builds — A Dockerfile optimization technique using multiple FROM statements to separate build-time dependencies from runtime artifacts, resulting in smaller final images through intermediate builder stages.
- Multi-stage Docker builds for Go — A Docker build pattern using separate stages for development, compilation, and runtime to optimize image sizes, copying only compiled binaries to minimal runtime containers like Alpine.
- Multi-stage Docker builds for Go applications — A Docker build optimization technique using separate build and runtime stages to compile Go applications in a full environment and package only the binary in a minimal Alpine runtime container, reducing final image size.
- Multi-stage Dockerfile instructions pattern — A Dockerfile construction pattern combining RUN commands for system updates, package installation, directory creation, and configuration modifications using sed, demonstrating sequential container environment setup.
- Multi-Stage Message Queue Workflow with Redis Coordination — A pattern using message queues (MQ1-MQ4) with Redis hash-based counters (queryDoneCount) to coordinate multi-project data joins, where each fanout listener queries different database tables and increments completion counters
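The coordination step of this pattern can be sketched without a broker: each fanout listener finishes its table query and atomically increments a shared counter, and whichever listener pushes the count to the expected total triggers the join. A dict stands in for the Redis hash here (`HINCRBY` would do the increment in the real pattern); the helper names are illustrative:

```python
EXPECTED_PARTS = 4  # one per fanout listener, MQ1..MQ4

def hincrby(store, hash_key, field, amount=1):
    """Dict-based stand-in for Redis HINCRBY: increment and return the new value."""
    store.setdefault(hash_key, {})
    store[hash_key][field] = store[hash_key].get(field, 0) + amount
    return store[hash_key][field]

def on_listener_done(store, task_id, run_join):
    """Called by each listener after its table query completes."""
    done = hincrby(store, "queryDoneCount", task_id)
    if done == EXPECTED_PARTS:
        run_join(task_id)  # all partial queries finished; join the results
```

The counter makes ordering irrelevant: the listeners can complete in any sequence, and exactly one of them observes the final count and runs the join.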
- multi-stage-docker-build-for-go-applications — Optimizing Go container images using multi-stage Docker builds with separate dev (golang:1.15-alpine), build (compilation), and runtime (alpine) stages to produce minimal production images.
- Multica platform — Open-source AI agent management platform that transforms coding agents into autonomous team members with task assignment, progress tracking, and skill reuse capabilities.
- Multicluster isolation — The architectural principle of separating east-west (cluster-to-cluster) traffic from north-south (external ingress) traffic using dedicated gateway deployments to avoid traffic flooding and maintain clear traffic boundaries.
- Multidimensional Pod Autoscaler (MPA) — GCP GKE-exclusive multi-dimensional autoscaling feature (beta) that enables simultaneous horizontal scaling based on CPU metrics via HPA and vertical scaling based on memory metrics via VPA, requiring pre-set CPU requests/limits in deployment resources.
- Multipart form file upload pattern — End-to-end pattern for handling file uploads combining client-side jQuery form serialization, AJAX submission, and server-side Apache Commons FileUpload processing
- Multiple Spring ApplicationContext pattern — Using multiple AnnotationConfigApplicationContext instances with parent-child relationships to organize bean definitions and control visibility hierarchies in Spring applications.
- Multiple WSL2 instances — The practice of running multiple independent Ubuntu distributions simultaneously in Windows Subsystem for Linux 2, enabling isolated development environments on the same Windows machine.
- Multistage Docker builds — A Docker build technique that uses multiple stages in a single Dockerfile to separate build and runtime dependencies, resulting in smaller final images and cleaner separation of concerns.
- Mutable vs immutable reduction in streams — Distinction between collect operations which use mutable containers for accumulation and reduce operations which produce immutable results through repeated combination.
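The entry describes Java streams, but the distinction translates directly into a Python analogy: `functools.reduce` repeatedly combines immutable values into a new result, while a "collect"-style operation fills one mutable container in place:

```python
from functools import reduce

nums = [1, 2, 3, 4]

# reduce-style: each step combines two values into a brand-new result;
# no intermediate value is ever mutated.
total = reduce(lambda a, b: a + b, nums, 0)

# collect-style: a single mutable container accumulates elements in place.
evens = []
for n in nums:
    if n % 2 == 0:
        evens.append(n)
```

In Java the same split is `Stream.reduce(...)` versus `Stream.collect(Collectors.toList())`; the mutable form avoids allocating a new container at every step, which is why `collect` is preferred for building collections.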
- MVCC (Multi-Version Concurrency Control) — A concurrency control mechanism in MySQL that allows multiple transactions to access the same data simultaneously without locking by maintaining multiple versions of data rows, addressing the conflicts between read and write operations.
- my-todo task tracking system — A personal task management and documentation tracking system using Markdown checkbox syntax to organize work items, documentation topics, and external resources.
- myid file — Server identification mechanism in ZooKeeper clusters where each node gets a numeric ID stored in a 'myid' file in its data directory to establish ensemble membership.
- MySQL 8.0 insecure initialization — Using the --initialize-insecure flag to create a MySQL database without setting a root password during initial setup, useful for development and testing environments.
- MySQL 8.0 native authentication — MySQL 8.0's default caching_sha2_password plugin and how to configure mysql_native_password for backward compatibility with older clients and applications.
- MySQL connection limit configuration — Resolution for MySQL connections being capped at 214 by setting the global max_connections parameter to a higher value using set global command, followed by PHP configuration updates for data visibility
- MySQL Database Performance Optimization — Database optimization techniques covering indexing strategies (B+ tree structure), query analysis with EXPLAIN, transaction isolation levels, MVCC concurrency control, InnoDB vs MyISAM storage engines, and slow query log analysis.
- MySQL error log configuration — The error logging system in MySQL that tracks database errors and critical events, configurable via the log_error system variable and accessible through SHOW VARIABLES commands.
- MySQL EXPLAIN execution plan types — Classification of query execution strategies in MySQL's EXPLAIN output, ranging from full table scans (ALL) to optimized single-row lookups (const/system), indicating how indexes are utilized during query execution.
- MySQL InnoDB performance tuning — Configuration parameters for optimizing InnoDB storage engine performance, including buffer pool size, log file settings, thread concurrency, and file-per-table storage.
- MySQL learning progression path — A structured curriculum pathway for learning MySQL database skills, progressing from beginner to expert proficiency levels
- MySQL logging configuration — Setting up general query log, slow query log, and error log in MySQL for monitoring database performance, troubleshooting queries, and tracking system issues.
- MySQL ODBC driver configuration — The process of installing and configuring Open Database Connectivity (ODBC) drivers to enable applications like ER/Studio to connect to MySQL databases, with 32-bit drivers specifically required for 32-bit applications
- MySQL pod access from Kubernetes — Accessing MySQL database services running inside Kubernetes pods through port forwarding, enabling local database clients to connect to containerized MySQL instances.
- MySQL pod access via kubectl — Accessing MySQL database pods in Kubernetes clusters through kubectl port-forwarding, which maps container ports (3306) to localhost for database administration and querying.
- MySQL portable installation — Running MySQL 8.0 on Windows without system service installation using ZIP archive distribution, custom my.ini configuration, and batch file startup.
- MySQL root user remote access configuration — Procedure to enable remote connections for MySQL root user by updating host from localhost to wildcard (%) and setting authentication credentials.
- MySQL slow query log — A MySQL logging mechanism that records queries exceeding configured time thresholds, used for performance analysis and optimization. Can be enabled dynamically via SET GLOBAL or persistently in my.cnf configuration.
- MySQL transaction isolation levels — The four isolation levels (READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, SERIALIZABLE) that control how concurrent transactions interact and determine which phenomena (dirty reads, non-repeatable reads, phantom reads) can occur.
- mysql-docker-environment-configuration — MySQL container configuration through environment variables for user creation (MYSQL_USER, MYSQL_PASSWORD), database initialization (MYSQL_DATABASE), and root password setup (MYSQL_ROOT_PASSWORD).
- Name-based virtual hosting in Kubernetes — Technique that allows hosting multiple domain names (hostnames) on a single IP address by routing requests to different services based on the HTTP Host header, configured through Ingress rules.
- Namespace resource lifecycle — When a namespace is deleted, all resources contained within it are automatically deleted, providing clean resource isolation and cleanup capabilities
- Namespace system calls — Three Linux system calls—clone(), setns(), and unshare()—that enable creation, joining, and manipulation of namespaces for process isolation
- Namespace-scoped resource naming — Resource names must be unique within a namespace but can be duplicated across different namespaces, enabling flexible resource organization without naming conflicts
- Navigation Timing API — Browser API (window.performance.timing) that provides detailed timestamps for page lifecycle events including DNS lookup, TCP connection, request/response, DOM parsing, and load events, enabling calculation of performance metrics like white screen time, domready time, and onload time.
- Navigation Timing API performance calculations — Mathematical formulas for calculating web performance metrics using Navigation Timing API timestamps, including DNS lookup time, TCP connection time, request/response time, DOM parsing time, and overall page load duration.
- Nested admonitions — The ability to embed one admonition block inside another, creating hierarchical information structures with multiple levels of styled callouts.
- Nested Callouts — The ability to embed callout blocks within other callout blocks, creating hierarchical information structures with multiple layers of depth, demonstrated through examples with three-level nesting.
- Netcat bidirectional communication — Netcat's ability to establish two-way communication channels between sender and receiver, demonstrated through send/receive test scenarios
- Netcat Windows listener mode — Windows Netcat command syntax for persistent listening mode using -l (listen), -L (listen harder/restart), and -p (port) flags
- netstat command — Windows network statistics utility for displaying active TCP connections, ports, and process identifiers with options like -abno for detailed information
- Netty Bootstrap Pattern — The standard Netty server and client initialization pattern using EventLoopGroup, ServerBootstrap/Bootstrap, Channel configuration, and ChannelInitializer with ChannelPipeline setup.
- Netty ByteBuf memory management — Netty's custom buffer management system using reference counting and explicit retain/release operations for efficient zero-copy operations and pooled memory allocation
- Netty Channel — A core abstraction in Netty representing a nexus to a network socket or I/O-capable component that handles asynchronous operations like read, write, connect, and bind
- Netty Channel Thread Safety — Channel implementations in Netty are guaranteed to be thread-safe, allowing Channel references to be stored and used across threads without synchronization concerns when sending data to remote endpoints.
- Netty ChannelPipeline — A container of ChannelHandlers that processes or intercepts inbound and outbound operations for a channel, created automatically for each new channel and enabling dynamic handler addition and removal.
- Netty connection acceptance flow — The multi-phase process from OP_ACCEPT event triggering to new connection registration, including NioSocketChannel creation, selector binding, and OP_READ registration for I/O readiness.
- Netty EventExecutor Architecture — Core executor components in Netty including SingleThreadEventExecutor, SingleThreadEventLoop, and EmbeddedEventLoop that form the foundation for event-driven task execution.
- Netty EventLoopGroup Hierarchy — The class hierarchy and architectural structure of EventLoopGroup in Netty, including NioEventLoopGroup, MultithreadEventLoopGroup, and DefaultEventLoopGroup implementations and their inheritance relationships.
- Netty Future — Netty's extension of the Future interface for asynchronous operation results in network channels, with success/failure/cancellation states and methods like isDone(), isSuccess(), and cause().
- Netty handler execution order — Inbound events execute through handlers in insertion order (1, 2, 3...) while outbound events execute in reverse insertion order (..., 3, 2, 1), creating two directional event flows.
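The two directional flows can be illustrated with a toy pipeline (this is not the Netty API, just a model of its ordering rule): inbound events visit handlers in insertion order, outbound events in reverse.

```python
class Pipeline:
    """Toy model of Netty's ordering rule: inbound fires head-to-tail
    (insertion order), outbound fires tail-to-head (reverse order)."""

    def __init__(self):
        self.handlers = []

    def add_last(self, name):
        self.handlers.append(name)

    def fire_inbound(self):
        return [f"in:{h}" for h in self.handlers]

    def fire_outbound(self):
        return [f"out:{h}" for h in reversed(self.handlers)]
```

This is why a decoder added before a business handler sees inbound bytes first, while an encoder added last sees outbound messages first on the way back out.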
- Netty HTTP client exception handling with response body — Technique for preserving and accessing HTTP response body content even when requests return error status codes (like 404 or 500), which typically discard response data in standard HTTP clients.
- Netty Promise — A writable Future in Netty that allows setting the result programmatically, serving as the producer side of asynchronous operations that consumers observe through Future.
- Netty Reactive Network Framework — Asynchronous event-driven network application framework for high-performance protocol servers and clients, providing non-blocking I/O, zero-copy capabilities, and extensible pipeline architecture for scalable network programming.
- Netty Reactor Implementation — Application of the Reactor pattern within the Netty framework, combining Java NIO capabilities with event-driven architecture to build scalable network applications.
- Netty-based HTTP clients — HTTP client implementations built on Netty framework for high-performance network operations
- Network deployment evolution — The historical progression from physical server deployment through virtualization (VM) to containerization, each addressing resource allocation and isolation challenges of the previous era.
- Neural network classification workflow — Four-step process for building classification models: extract entity features, define neural network structure, train model with parameter adjustment, and perform predictions.
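The four steps above can be walked through in a dependency-free perceptron sketch (a toy dataset and learning rate chosen for illustration): (1) extract features, (2) define the structure, (3) train with parameter adjustment, (4) predict.

```python
# 1. Extract entity features: 2-D points, label 1 if above the line y = x.
data = [((0.0, 1.0), 1), ((1.0, 0.0), 0), ((0.2, 0.9), 1), ((0.9, 0.1), 0)]

# 2. Define the structure: one linear unit with a step activation.
w = [0.0, 0.0]
b = 0.0

def predict(x):
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1 if s > 0 else 0

# 3. Train: nudge the parameters on every misclassified example.
for _ in range(20):
    for x, y in data:
        err = y - predict(x)
        if err:
            w[0] += 0.1 * err * x[0]
            w[1] += 0.1 * err * x[1]
            b += 0.1 * err

# 4. Predict on unseen points.
above = predict((0.1, 0.8))   # expect class 1
below = predict((0.8, 0.1))   # expect class 0
```

Real classifiers replace the single unit with stacked layers and the update rule with gradient descent, but the workflow — features, structure, training, prediction — is the same.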
- NFS shared storage configuration — Setting up Network File System (NFS) server with exports configuration for shared volume storage accessible by multiple client machines in the infrastructure
- NFS Volume — Network File System volume that mounts remote network storage into Pods. Unlike emptyDir, data persists beyond Pod deletion and can be shared across Pods. Often used with cloud storage services.
- ng-repeat built-in properties — Special variables available within ng-repeat directives in AngularJS: $index (current iteration index), $first (boolean for first item), and $last (boolean for last item).
- nginx configuration syntax — The declarative configuration language used by the nginx web server, featuring context blocks (http, server, location, events), directives, variables (prefixed with $), and curly-brace block structure with # comments.
- Nginx directives reference — Comprehensive listing of nginx configuration directives organized by functional categories including proxy settings, caching, SSL, logging, performance tuning, and upstream server management.
- Nginx Ingress Canary Annotations — Configuration annotations in Nginx Ingress Controller that enable canary deployment through header-based, cookie-based, or weight-based traffic routing between service versions.
- NGINX Ingress Controller — An ingress controller implementation that uses NGINX as a reverse proxy and load balancer to handle Kubernetes Ingress resources, specified via the ingressClassName field.
- NGINX Ingress Controller installation — Installation of ingress-nginx controller using kubectl apply with official static manifests, which creates namespace, service accounts, RBAC resources, services, deployment, and ingressclass resources.
- NGINX upstream proxy configuration — Reverse proxy configuration pattern defining upstream backend servers with load balancing, health checks (max_fails, fail_timeout), and header forwarding (x-forwarded-for).
- ngrok — A secure tunneling service that exposes local development servers to the public internet through temporary URLs, commonly used for webhook testing and bot development.
- Ngrok tunneling for local development — Secure tunneling service that exposes local development servers to the public internet through temporary URLs, enabling webhook testing and external access during development.
- NioEventLoop thread model — Netty's event loop architecture using ThreadPerTaskExecutor with DefaultThreadFactory, where FastThreadLocalThread contains an InternalThreadLocalMap for optimized thread-local storage, and the event loop runs on a single thread processing I/O events.
- NioServerSocketChannel — A concrete Netty Channel implementation for server-side non-blocking I/O operations using Java NIO, extending the abstract Channel functionality
- NioServerSocketChannel initialization lifecycle — The complete startup sequence of a Netty server channel, including channel creation through ChannelFactory, pipeline initialization, configuration options (options0, attrs0, currentChildOptions), and registration with an EventLoop through doRegister() and doBind().
- NLP-based compression — Rule-based compression using spaCy for fast (<100ms) token reduction across 15+ languages, achieving 15-30% compression through grammatical pattern removal with completely offline operation.
- Node selector migration pattern — A workload migration technique using Kubernetes nodeSelector to gradually shift pods between node pools with staggered timing to maintain service availability.
- node-exporter — Host-level metrics collector deployed as DaemonSet to all worker nodes, monitoring compute node resources including CPU, memory, disk, and network metrics by exposing host filesystem paths (/proc, /sys)
- Node.js web application development — Server-side web application development using Node.js runtime and JavaScript, focusing on backend services and APIs
- NodePort service type — A Kubernetes Service type that exposes the service externally on each node's IP at a static port, allowing access to the service from outside the cluster by targeting any node's IP address with the assigned port.
- NodePort service type configuration — A Kubernetes service configuration that exposes the service externally through a specific port on each cluster node, commonly used to make internal services like Dashboard accessible from outside the cluster
- nohup persistent process execution — Technique for keeping processes running after logout by redirecting output to files and using nohup to ignore hangup signals
- Non-structured note organization — The principle of avoiding rigid hierarchical structures in favor of organic, link-based organization that emerges naturally from connections.
- Note independence principle — Each note should be self-contained and comprehensible on its own, allowing notes to be moved, processed, or combined without losing context.
- Note linking and annotation — The practice of creating explicit connections between notes with explanatory context about why the relationship exists, enabling knowledge networks and structured idea development.
- Note linking practices — The practice of creating explicit connections between notes with explanatory context about why the relationship exists, enabling knowledge networks and structured idea development.
- Notepad++ User-Defined Language — Syntax highlighting configuration system in Notepad++ that allows users to define custom language rules through XML files, enabling color-coded editing for unsupported file types.
- Nous Portal — A unified API gateway and authentication platform for accessing multiple AI models, including Xiaomi MiMo V2 Pro, through provider-agnostic endpoints compatible with OpenAI/Goose/OpenRouter.
- Numbered prefix convention — A naming convention using numeric prefixes (e.g., 000-) to control sort order and establish hierarchical relationships between documents, often prioritized by importance or abstraction level.
- nuwa-skill — A Claude Code Skill (Meta-Skill) that systematically extracts cognitive frameworks from public figures—capturing mental models, decision heuristics, and expression DNA to generate runnable AI personas that emulate how experts think, not just what they say.
- NVFP4 quantization format — NVIDIA 4-bit quantization format used to compress large language models like Qwen 3.5 35B from their original size to approximately 18.66 GB, enabling them to run on consumer hardware with 32 GB RAM.
- NVM (Node Version Manager) — A tool for managing multiple active Node.js versions on a single machine, allowing developers to switch between different Node versions as needed for different projects.
- Object to dictionary conversion in Python — Converting Python class instances to dictionaries via the __dict__ attribute (or vars()) to make them JSON serializable, since JSON libraries cannot serialize custom class objects directly
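The conversion is a one-liner once the instance's attributes live in `__dict__` (the class and attribute names below are illustrative):

```python
import json

class User:
    def __init__(self, name, age):
        self.name = name
        self.age = age

u = User("alice", 30)
# json.dumps(u) would raise TypeError: User is not JSON serializable.
payload = json.dumps(vars(u))  # vars(u) is equivalent to u.__dict__
```

Note this only captures per-instance attributes; properties, slots, and nested custom objects need explicit handling (e.g. a `default=` hook in `json.dumps`).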
- Observability pipeline configuration — Structured YAML definition combining receivers, processors, and exporters into named pipelines that determine how telemetry flows through the OpenTelemetry Collector
- Observable Execution Pattern — Design principle where every tool call is visible to users through callbacks and progress displays, preventing black-box operations and enabling transparency
- Obsidian Admonition — A specific Obsidian plugin that adds callout boxes and styled content blocks for enhanced document organization and visual emphasis
- Obsidian Admonition plugin — An Obsidian community plugin that creates styled callout blocks and information boxes with customizable types, colors, icons, and collapsible states.
- Obsidian button syntax — A code block syntax for creating interactive command buttons in Obsidian using parameters like name, type, action, and color to trigger functionality.
- Obsidian Buttons Plugin — An Obsidian plugin that creates clickable buttons within notes using code block syntax, allowing users to execute actions, navigate links, and integrate with other plugins like Templater.
- Obsidian callout types — Complete list of native Obsidian callout/admonition types including note, abstract, info, tip, success, question, warning, failure, danger, bug, example, and quote with their aliases.
- Obsidian callouts — Native syntax in Obsidian for creating styled callout boxes (admonitions) with various types like note, info, warning, success, and question, supporting markdown, links, and embedded content.
- Obsidian callouts system — Built-in callout/admonition syntax in Obsidian supporting multiple types (note, info, warning, success, etc.) with collapsible blocks, custom icons, colors, and full markdown rendering.
- Obsidian data classification — Systematic approaches for categorizing and organizing information within the Obsidian note-taking environment to enable efficient retrieval and knowledge structuring.
- Obsidian getting started resources — Official Obsidian Chinese documentation and help resources available at publish.obsidian.md/help-zh for new users learning the note-taking application.
- Obsidian Homepage plugin — An Obsidian plugin that allows users to configure a specific note to open automatically when the application starts, rather than defaulting to the most recently used note.
- Obsidian hot keys — Essential keyboard shortcuts for Obsidian including Ctrl+Shift+I for opening the developer console and Ctrl+W for closing windows.
- Obsidian learning resources — Curated collection of tutorials, documentation, and guides for mastering Obsidian as a note-taking and knowledge management tool.
- Obsidian Note-Taking Ecosystem — Personal knowledge base application supporting markdown formatting, bidirectional linking, plugins (Admonition, Templater, QuickAdd), graph visualization, and extensive customization for Zettelkasten-style knowledge management.
- Obsidian plugin ecosystem — Extensible functionality for Obsidian through community plugins that enhance core capabilities for note-taking, organization, and knowledge management workflows.
- Obsidian Plugin System — The extensibility framework for the Obsidian note-taking application that allows third-party developers to create plugins adding new functionality to the core application.
- Obsidian Template Ecosystem — External GitHub repositories and community resources providing starter templates for the Obsidian note-taking application.
- Obsidian-VuePress Asset Path Compatibility — Configuration techniques for making Obsidian attachment paths compatible with VuePress, including disabling certain Obsidian path settings and using relative paths
- OCI image — The standardized container image format specification defined by the Open Container Initiative, used for packaging applications and their dependencies.
- Official software artifact management — The practice of using only official software packages and files with cloud backup storage to prevent issues from updates or discontinued downloads.
- OhMyMock — A developer-focused Chrome extension for API mocking and testing, providing tools for simulating HTTP responses during web development.
- Okteto CLI — A command-line tool for creating and managing remote development environments powered by Kubernetes, with commands for context management, namespace operations, build/deploy workflows, and development container synchronization.
- Okteto context management — Context switching mechanism in Okteto CLI that allows developers to configure and switch between different Kubernetes clusters and namespaces, similar to kubectl context management.
- Okteto development workflow — Two-step process for setting up cloud development environments: (1) deploy application code using 'okteto deploy', then (2) launch the development container using 'okteto up' for live development in Kubernetes.
- Okteto kubeconfig command — CLI command that downloads and configures Kubernetes credentials for the cluster selected via Okteto context, enabling standard kubectl operations against Okteto-managed clusters.
- Okteto namespace operations — Commands for listing, creating, and deleting Kubernetes namespaces within Okteto, enabling isolation between different environments (e.g., prod, qa, dev) for the same user account.
- okteto.yml manifest — Configuration file defining build, deploy, and development environment settings for Okteto, including image definitions, deployment commands, and development container specifications.
- Oleg Šelajev — GraalVM developer advocate known for educational content explaining GraalVM concepts, particularly native image technology and performance optimization techniques.
- Ollama MLX support for Apple Silicon — Ollama's integration with Apple's MLX inference engine, delivering nearly 2x performance improvements for running large language models locally on MacBook devices with Apple Silicon chips.
- Ollama Web UI for local LLMs — ChatGPT-style web interface provided by Ollama for interacting with locally running language models, featuring parameter adjustment and model switching capabilities.
- Online Front-end Editors — Browser-based code editing environments that allow developers to write, test, and share HTML, CSS, and JavaScript code without local setup, including tools like JSBin and JSFiddle.
- Opaque Secret type — The default and most commonly used Kubernetes Secret type for storing arbitrary user-defined data in base64-encoded format, typically used for passwords, keys, and other sensitive configuration data.
- Open source contribution workflow — The systematic process for participating in open source development, including finding issues, forking repositories, making changes, and submitting pull requests
- Open Source Friday — A global initiative encouraging developers to dedicate time on Fridays to contribute to open-source software they use and love, with resources and guidance available through opensourcefriday.com
- OpenSSH Certificate — SSH-specific certificates used for authentication in secure shell connections, with specific best practices for deployment and key management
- OpenSSL Certificate Conversion Commands — Command-line operations for converting between certificate formats, including generating CSR files, exporting PFX to PEM format, combining key and certificate into PFX, and extracting certificates from remote servers using openssl s_client.
- OpenSSL Certificate Generation Workflow — The step-by-step process using OpenSSL commands to generate root CA private keys, create self-signed certificates, and configure them for trusted use
- OpenSSL certificate management — Practical guide to using OpenSSL command-line tools for generating, managing, and troubleshooting SSL/TLS certificates, including self-signed certificates, CA creation, and certificate format conversion.
- OpenSSL self-signed CA certificate generation — Creating a private Certificate Authority using OpenSSL commands to generate RSA key pairs and self-signed root certificates for development environments.
- OpenSSL tool — A command-line utility for implementing cryptographic operations, including generating certificates, managing certificate authorities, and configuring SSL/TLS security parameters.
- OpenSSL certificate signing command — Using the 'openssl req -x509' command to generate a self-signed certificate, with parameters including -days to set the validity period, -newkey to generate a key, -keyout to specify the key output path, -out to specify the certificate output path, and -nodes to skip password protection.
- OpenSSL self-built CA certificate — Using OpenSSL tools to create a private Certificate Authority (CA), including generating an RSA key pair, self-signing the CA root certificate, and choosing between password-protecting the key (-des3) or leaving it unprotected.
- OpenTelemetry Collector — A vendor-agnostic observability data pipeline that receives, processes, and exports telemetry data through configurable receivers, processors, and exporters
- Operator Pattern (Claude Code) — A multi-terminal workflow pattern using 'claude -w' to create isolated workspaces with independent git worktrees and branches, enabling parallel task execution with human coordination and clean context windows for each instance.
- Oracle connection methods (SERVICE_NAME vs SID) — Two distinct Oracle database connection identifier methods: SERVICE_NAME for service-based connections and SID for instance-based connections
- Oracle cross-database data recovery — A data recovery technique using database links to insert records from remote tables into local tables, typically used for disaster recovery scenarios where data needs to be restored from a backup or replica database.
- Oracle Database Docker deployment — Containerized Oracle Database setup using Docker with datagrip/oracle:11.2 image, port mappings, and volume persistence
- Oracle Database Link — A database object in Oracle that allows queries and data manipulation across different database instances, enabling cross-database operations through SQL queries with the @dblink syntax.
- Oracle date and timestamp formatting — ALTER SESSION commands to set NLS_DATE_FORMAT and NLS_TIMESTAMP_FORMAT parameters for consistent datetime display format in query results
- Oracle JDBC connection string formats — Two JDBC URL formats for Oracle thin driver: using SID with colon separator (:@host:port:sid) and using service name with slash separator (:@host:port/serviceName)
- Oracle JDBC Maven installation — Manual installation of Oracle JDBC drivers to local Maven repository using mvn install:install-file with Oracle-specific groupId and artifactId parameters
- Oracle NLS parameter queries — Using v$nls_parameters view and userenv('language') function to check database character set and language settings for diagnosing encoding issues
- Oracle PL/SQL Developer configuration — Configuration setup for PL/SQL Developer IDE including Oracle Instant client paths and OCI library settings
- Oracle tnsnames.ora configuration format — TNS network configuration file format defining database connection parameters using SERVICE_NAME and SID connection methods
- Oracle XE default credentials — Standard default credentials for Oracle Express Edition including system users (SYS/SYSTEM: oracle) and OS users (root/install, oracle/install)
- Orchestrator Pattern — Hierarchical multi-agent pattern with two roles—leaf agents (complete tasks, cannot delegate) and orchestrator agents (can delegate_task to spawn sub-agents)—enabling structured parallel task execution with configurable depth limits and concurrency controls.
- ORIG_HEAD recovery — A Git recovery mechanism using 'git reset ORIG_HEAD --hard' to revert to the state before a rebase operation when errors occur, particularly useful after incorrect rebase operations like rebasing master onto a feature branch.
- os.Args command-line argument access — Go's os package provides access to command-line arguments through the Args []string variable, which includes the program name at index 0.
- OSS session data sharing — Encourages users to publish coding session data to HuggingFace via pi-share-hf, providing real task data rather than toy benchmarks for community evaluation and research of AI agent capabilities.
- OTLP (OpenTelemetry Protocol) — The protocol specification and data format used to transmit telemetry data between applications and collectors, supporting both gRPC and HTTP
- Outline-First Learning Approach — A learning methodology that emphasizes establishing a technology's high-level structure and key components before diving into implementation details, using mind maps and video tutorials to build mental models.
- PaaS platform requirements — Essential infrastructure components needed to build a Platform-as-a-Service including container runtime, orchestration layer, middleware clusters, distributed storage, monitoring/logging systems, and CI/CD pipelines.
- pack (CLI tool) — A CLI tool maintained by the Cloud Native Buildpacks project that builds OCI images without requiring Dockerfiles by automatically detecting build systems like Maven or Gradle.
- Packet Capture Setup — The configuration process required to enable network traffic interception, including proxy settings and device network configuration for both desktop and mobile platforms.
- Pagination-based Report Document Assembly — A strategy for handling large reports by generating individual pages as temporary files, then combining them into a final document once all pages are complete.
- PAM loginuid configuration for SSH — The practice of modifying the PAM (Pluggable Authentication Modules) configuration in SSH by commenting out the pam_loginuid.so requirement to allow SSH service to run properly within Docker containers where loginuid tracking may not be available.
- Panic-based error handling in Go — Using the panic() function to immediately terminate execution when unrecoverable errors occur, typically during development when not prepared to handle errors gracefully.
- partitioningBy collector — Specialized grouping collector that partitions elements into two groups based on a boolean predicate, returning a Map with true/false keys.
- Patch verification workflow — A three-step verification process using git apply --stat to review patch contents, --check to test compatibility without applying, followed by actual application with git am.
- Payment Channel Architecture — The structural design and flow of payment systems, including the integration layer between merchants and upstream payment providers, covering channel selection, routing, and transaction processing workflows.
- Payment domain modeling — The practice of creating structured domain models for payment systems to represent entities like transactions, merchants, channels, and orders in a microservices architecture.
- Performance Metrics Calculation — Mathematical formulas derived from Navigation Timing API timestamps to calculate specific performance indicators: DNS query time (domainLookupEnd - domainLookupStart), TCP connection time (connectEnd - connectStart), request time (responseEnd - responseStart), DOM parsing time (domComplete - domInteractive), white screen time (domLoading - fetchStart), domready time (domContentLoadedEventEnd - fetchStart), and onload time (loadEventEnd - fetchStart).
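The formulas above can be sketched in Python against a hypothetical timing snapshot; the field names follow the Navigation Timing API, but the sample values are made up:

```python
# Hypothetical Navigation Timing snapshot (milliseconds since an arbitrary origin).
timing = {
    "fetchStart": 0, "domainLookupStart": 5, "domainLookupEnd": 25,
    "connectStart": 25, "connectEnd": 60,
    "responseStart": 60, "responseEnd": 160,
    "domLoading": 170, "domInteractive": 300,
    "domContentLoadedEventEnd": 320, "domComplete": 400, "loadEventEnd": 420,
}

# Each metric is a simple difference of two timestamps, per the formulas above.
metrics = {
    "dns": timing["domainLookupEnd"] - timing["domainLookupStart"],
    "tcp": timing["connectEnd"] - timing["connectStart"],
    "request": timing["responseEnd"] - timing["responseStart"],
    "dom_parse": timing["domComplete"] - timing["domInteractive"],
    "white_screen": timing["domLoading"] - timing["fetchStart"],
    "domready": timing["domContentLoadedEventEnd"] - timing["fetchStart"],
    "onload": timing["loadEventEnd"] - timing["fetchStart"],
}
print(metrics)
```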
- Performance monitoring with Elasticsearch and Kibana — Architecture pattern for storing and visualizing frontend performance metrics by sending JSON-formatted timing data to Elasticsearch with optional middleware layer, enabling analysis through Kibana dashboards.
- Performance testing tools — External tools and services for analyzing web performance including Google PageSpeed Insights, GTmetrix, and Chrome DevTools, which provide insights and optimization recommendations.
- Permanent notes — Refined, durable notes in Zettelkasten systems that represent fully-formed ideas meant for long-term retention and integration into the knowledge network through cross-linking
- Persistent Volume Claim (PVC) — Kubernetes abstraction for requesting storage resources from a cluster, decoupling storage from pod lifecycle and enabling persistent data management.
- Persistent Volumes — Storage resources with lifecycle independent of Pods, surviving Pod deletion and restart. Enables data persistence across container restarts. Implemented through PV (PersistentVolume) and PVC (PersistentVolumeClaim) objects.
- PersistentVolume (PV) — Kubernetes abstraction for storage resources with a lifecycle independent of Pods, enabling data persistence across Pod lifecycle events through static or dynamic provisioning.
- PersistentVolumeClaim (PVC) — User-facing request for storage resources that specifies size, access modes, and storage class, continuously seeking matching PVs until binding occurs or remaining in Pending state.
- Personal knowledge management — The systematic practice of capturing, organizing, and maintaining personal information, ideas, and learning resources using digital tools and structured workflows to support knowledge development and retrieval.
- Personal Knowledge Management (PKM) — Systematic practice of capturing, organizing, and maintaining personal information, ideas, and learning resources using digital tools and structured workflows to support knowledge development and retrieval
- Personal Knowledge Management Maps — The practice of organizing personal knowledge into structured maps (MOC - Maps of Content) covering different domains like DevOps learning, tool documentation, and specific applications like Obsidian.
- Personal Knowledge Management System — A personal knowledge management approach using GitHub projects for organization, YouTube creators for learning resources, and Obsidian for documentation, with structured maps (MOC) for DevOps, tools, and learning workflows.
- Personal paraphrasing principle — The practice of explaining ideas in your own words rather than copying text, ensuring understanding and preventing plagiarism in note-taking.
- PGLite Embedded PostgreSQL — Local embedded PostgreSQL 17.5 database that enables zero-configuration brain initialization in 2 seconds without external services, with optional upgrade path to Supabase for production workloads.
- Pi Monorepo — A TypeScript full-stack toolkit for AI agent development, organized as an npm workspaces monorepo containing core modules such as a unified LLM API, an agent runtime, an interactive coding agent CLI, and TUI/Web UI component libraries; developed and open-sourced by libGDX author Mario Zechner.
- pi-agent-core runtime — The agent runtime core, encapsulating tool calling, state management, and event-stream processing; supports parallel/sequential tool-execution modes, steering/follow-up message injection, and beforeToolCall/afterToolCall hooks, and provides a conversion layer between AgentMessage and LLM Message.
- pi-ai unified LLM API — A unified abstraction layer supporting 20+ LLM providers (OpenAI, Anthropic, Google, Azure, AWS Bedrock, Ollama, vLLM, etc.), offering a consistent streaming event interface, cross-provider switching, TypeBox schema validation, token/cost tracking, and OAuth support.
- pi-coding-agent CLI — An interactive coding agent CLI built on pi-ai and pi-agent-core, supporting seamless switching among 20+ LLM providers, dual TUI and RPC modes, an extension system (custom-provider, with-deps), OAuth provider integration, and automatic publishing of session data to HuggingFace for open research.
- pilot-agent — The Istio agent component that handles xDS protocol communication with the Istio control plane and provides configuration to proxies or applications; in proxyless mode, it runs standalone without Envoy.
- Pipeline parameterization — Technique for making deployment pipelines reusable by injecting variables for image names, version tags, git branches, and commit IDs, enabling flexible multi-environment deployments without code changes.
- PKCS#12 Format (.pfx/.p12) — A binary certificate format used primarily on Windows that stores server certificates, intermediate certificates, and private keys in a single encrypted, password-protected file, commonly requiring conversion from PEM format for cross-platform use.
- PKCS12 certificate conversion — OpenSSL command for converting TLS certificates and private keys into PFX format for compatibility with various server platforms
- PL/SQL character encoding configuration — Setting NLS_LANG environment variable (e.g., AMERICAN_AMERICA.AL32UTF8) before launching PL/SQL Developer to prevent character encoding issues and display Chinese characters correctly
- Platform-Agnostic Core Architecture — Design pattern where a single AIAgent class (~9,200 lines) serves multiple entry points (CLI, Gateway, ACP, Batch, API Server) with platform differences isolated to outer layers
- Plugin directory structure preservation — The requirement that when moving Eclipse plugins, the internal directory structure (features/ and plugins/ subdirectories) must be maintained exactly for the plugin to function correctly.
- Pod (Kubernetes) — The smallest deployable unit in Kubernetes that encapsulates one or more containers, running on Nodes and serving as the core abstraction around which all Kubernetes operations revolve.
- Pod annotation-based monitoring — Automatic service discovery pattern where Prometheus monitors pods annotated with prometheus_io_scrape=true, prometheus_io_port, and prometheus_io_path, or blackbox_* annotations for probe-based health checking.
- Pod label selector — A mechanism in Kubernetes that uses key-value label pairs on Pods to dynamically group and route traffic, enabling Services to automatically discover and target matching Pods even as they are replaced or scaled.
- Pod lifecycle and status — The states and phases of Kubernetes pods including creation (ContainerCreating), readiness (READY status), and termination, observable through kubectl get commands.
- Pod lifecycle data persistence — The principle that data stored in emptyDir volumes has a temporary nature tied to the Pod's lifecycle - created when the Pod starts and deleted when the Pod is removed, making it suitable for caches and temporary storage but not persistent data.
- Pod lifecycle states — The five fundamental states of a Kubernetes Pod: Pending (submitted but not scheduled), Running (scheduled and executing), Succeeded (all containers completed successfully), Failed (at least one container exited abnormally), and Unknown (communication issues between nodes).
- Pod restart policies — Configuration options (Always, OnFailure, Never) that determine how Kubernetes handles container termination and failures, with different defaults required by various controllers (Job, Deployment, DaemonSet)
- Podman — A daemonless container engine that allows containers to run without root privileges, providing enhanced security compared to Docker.
- Podman Desktop — A graphical interface application for managing containers and pods, providing a user-friendly alternative to command-line container operations.
- Port identification — Technique for identifying which applications are using specific network ports on Windows systems using netstat with process identification flags
- port-forwarding ingress-nginx controller locally — Using kubectl port-forward to expose ingress-nginx controller service locally for testing, mapping localhost:8080 to service port 80
- PostgreSQL container initialization in Kubernetes — Using ConfigMap volumes to inject SQL initialization scripts into PostgreSQL containers via the /docker-entrypoint-initdb.d directory for automatic schema setup
- Postman environment variables — Variable storage mechanism in Postman (pm.environment.set/get) for maintaining state across requests, such as authentication tokens and user identifiers.
- Postman test scripts — JavaScript code snippets that execute in the Postman sandbox to process API responses, extract data using pm.environment.set(), and dynamically set environment or global variables for subsequent requests.
- PowerShell Get-Process cmdlet — PowerShell command for retrieving and analyzing running process information including handles, memory usage (VM, WS, PM), process IDs, and thread details.
- PowerShell object filtering and selection — Techniques for extracting specific properties from PowerShell objects using Select-Object, Format-List, and Format-Table to customize output display.
- PowerShell object property access — Method of accessing nested properties and collections within PowerShell objects by assigning them to variables and using dot notation to explore object hierarchies like StartInfo.EnvironmentVariables.
- PowerShell package installation — Administrative PowerShell commands and scripts used to install package managers and their dependencies on Windows systems, including execution policy bypass and remote script invocation.
- PowerShell pipeline operations — Using the pipe operator (|) to chain commands and pass objects between cmdlets for sequential processing, such as retrieving process data and immediately sorting or formatting it.
- PowerShell profile prompt customization — Technique for customizing PowerShell prompts by modifying Profile.ps1 to include ANSI escape sequences that communicate current directory path to Windows Terminal for same-directory tab functionality.
- PowerShell script execution policy configuration — Windows security configuration requiring Set-ExecutionPolicy RemoteSigned to enable PowerShell script execution for monitoring tasks
- PowerShell sorting and comparison — Using Sort-Object cmdlet to order PowerShell objects based on specific properties, with options for ascending or descending sorting and custom property selection.
- PR contributor testing workflow — Istio contribution requirement where PR authors must test Bookinfo changes with their own Docker registry before official image builds.
- Predicate interface — A Java functional interface representing a boolean-valued function of one argument, commonly used for filtering, testing, and conditional operations.
- premain method — The entry point method for Java Agents that executes before the application's main method, accepting agent arguments and an Instrumentation instance as parameters.
- Primary-Remote multicluster topology — An Istio multicluster configuration pattern where remote clusters access the control plane (istiod) hosted in a primary cluster through an exposed service via the east-west gateway.
- Private container registry with Harbor — Deployment and configuration of Harbor as an on-premise Docker registry for storing container images, including nginx reverse proxy configuration, DNS integration, and pushing/pulling images to local registry
- Private Docker Registry — A self-hosted Docker image repository for storing and managing container images privately within an organization's infrastructure.
- Private Docker registry with registry:2 — Running a private Docker registry using the official registry:2 image with volume mounting for persistent storage and port mapping for access.
- Private Maven repository — A Maven repository configuration for hosting private Java dependencies, with integration considerations for GitHub Packages authentication using personal access tokens.
- Private Maven repository hosting — Using GitHub Packages to host private Maven repositories, requiring proper authentication configuration via personal access tokens
- Problem-Driven Learning — An educational approach where practical hands-on implementation and solving real problems takes precedence over theoretical study, as troubleshooting reveals genuine knowledge gaps.
- Process supervision with Supervisor — Using Supervisor daemon to manage and monitor K8S components (etcd, API server, controller manager, scheduler, kubelet, kube-proxy), ensuring automatic restart and log collection through .ini configuration files.
- processMessageOrRequeue Pattern — Message processing pattern with configurable retry logic (default 6 retries, 1-minute delay) that updates task status and advances workflow stages based on Redis completion counters, with failure handling
- Profile Isolation — Multi-tenancy pattern where each profile has independent HERMES_HOME, config, memory, sessions, and gateway PID, enabling multiple concurrent isolated agent instances
- progress-bar-toast-indicator — The toastr progressBar option that displays a visual countdown indicator showing remaining time before the notification auto-dismisses
- Progressive Learning Review — A cyclical evaluation process after initial learning to assess whether continued study is necessary, whether solutions exist, and whether better alternatives have emerged.
- Project AIRI — Open-source AI VTuber project replicating Neuro-sama with real-time voice chat, gaming capabilities, and multi-platform support
- project-related-notes — Notes tied to specific projects or deliverables that are distinct from the permanent knowledge base but may eventually feed into it with relevant insights and findings.
- Prometheus alert rules — Rule definitions in rules.yml using PromQL expressions to trigger alerts on conditions like CPU/memory usage (>85%), disk space/inodes (<10%), network throughput, and HTTP probe failures, with severity labels and templated annotations.
- Prometheus alerting rules and PromQL — YAML-based rule definitions using PromQL expressions for threshold-based alerts covering host resource usage (CPU, memory, disk, network), HTTP probe status, SSL certificate expiry, pod resource consumption, and application-specific metrics
- Prometheus ecosystem components — The core architectural components that work with Prometheus including Alertmanager for alert routing, PushGateway for short-lived job metrics, Node Exporter for system metrics, and kube-state-metrics for Kubernetes resource scraping.
- Prometheus monitoring architecture — Google Borg-derived monitoring system using Pull-based metrics collection with TSDB storage, supporting Pushgateway for push-based sources, Alertmanager for alerting, and Grafana for visualization.
- Prometheus monitoring architecture components — Core Prometheus ecosystem components including Prometheus Server for metrics collection and storage, PushGateway for push-based metrics, Exporters/Jobs for data collection, Service Discovery mechanisms, Alertmanager for alert routing, and UI layers (Prometheus web UI, Grafana, API clients)
- Prometheus Operator — A Kubernetes operator that simplifies Prometheus deployment and management through Helm charts, providing compatibility layers and automated configuration for monitoring Kubernetes clusters.
- Prometheus Operator for Istio — Alternative to standard Prometheus deployment that uses ServiceMonitor and PodMonitor custom resources to manage Istio control plane and Envoy proxy monitoring, requiring metrics merging to be enabled.
- Prometheus RBAC configuration — ServiceAccount, ClusterRole, and ClusterRoleBinding setup that grants Prometheus permissions to read nodes, pods, services, endpoints, and metrics from Kubernetes API server and /metrics endpoints.
- Prometheus scrape configuration — Prometheus.yml configuration defining scrape jobs for etcd, Kubernetes API servers, pods, kubelet, cadvisor, kube-state-metrics, and blackbox probes using kubernetes_sd_configs for service discovery and relabel_configs for target filtering and label management.
- Prometheus service discovery configurations — Configuration patterns for discovering Kubernetes targets using kubernetes_sd_configs with roles (endpoints, pod, node) and relabel_configs for label manipulation and metric filtering
- Prometheus-based K8S Monitoring Stack — Enterprise monitoring architecture combining Prometheus, Grafana, and specialized exporters (kube-state-metrics, node-exporter, cAdvisor, blackbox-exporter) for comprehensive cluster and application observability.
- Prometheus smooth configuration reload — Prometheus configuration reload technique: sending a SIGHUP signal to the running Prometheus process (kill -SIGHUP) hot-reloads rule files (rules.yml) and configuration files without restarting the Pod.
- Prompt compression benchmarks — Evaluation framework measuring compression effectiveness across scenarios (system prompts, API docs, resumes), tracking token reduction rates and fact preservation to ensure semantically lossless compression (verified 13/13 facts retained).
- Prompt Stability Principle — System prompt invariant during conversation to maintain cache coherence, with no cache-breaking mutations except explicit user actions
- PropertyResolver and PropertySource — Spring's property resolution mechanism through PropertyResolver interface and @PropertySource annotation, enabling external property configuration and placeholder resolution with SpEL integration.
- Protocol Buffers — Google's language-neutral binary serialization format for efficient data transmission and storage in distributed systems.
- Proxy Pattern (代理模式) — One of the 23 classic GoF design patterns that provides a surrogate or placeholder for another object to control access to it, listed among the fundamental patterns in the comprehensive collection.
- Proxyee Down — An open-source, free high-speed HTTP downloader built on Netty framework for efficient big data downloading
- Proxyee Down project — An open-source high-speed HTTP downloader built on Netty framework that provides big data download capabilities with improved performance over traditional HTTP clients.
- Public Key Binding — The fundamental mechanism where a public key is associated with an entity's identity through certificate metadata, enabling cryptographic verification and authentication
- Public key cryptography algorithms — Asymmetric encryption methods including RSA, DSA, ECDSA, elliptic curve cryptography (ECC), and Diffie-Hellman key exchange variants, used for secure key establishment and digital signatures.
- Public key distribution problem — The security challenge of verifying that a public key obtained over a network is authentic and belongs to the claimed entity
- Public Key Infrastructure (PKI) — Comprehensive guide for engineers covering certificate authorities, certificate chains, trust relationships, and the complete infrastructure for managing digital certificates and cryptographic keys in production environments.
- Pull with rebase — A Git configuration and workflow alternative to merge that uses git pull --rebase to integrate remote changes, avoiding unnecessary merge commits and maintaining a cleaner commit history.
- Pulumi — A modern Infrastructure as Code platform that enables developers to define and manage cloud infrastructure using familiar programming languages (JavaScript, Java, Python, Go, etc.) instead of domain-specific configuration languages like YAML.
- PUT vs PATCH in REST APIs — Distinction between PUT (complete resource replacement) and PATCH (partial resource update) for modifying server resources, with different semantics and use cases
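The replace-vs-merge distinction can be sketched without a network, modeling the stored resource as a dict (field names here are illustrative, not any real API):

```python
# Stored representation of a resource
resource = {"name": "Ada", "email": "ada@example.com", "active": True}

def put(resource, body):
    """PUT semantics: the body is the complete new representation."""
    return dict(body)

def patch(resource, body):
    """PATCH semantics: the body lists only the fields to change."""
    return {**resource, **body}

replaced = put(resource, {"name": "Ada"})
updated = patch(resource, {"active": False})
print(replaced)  # -> {'name': 'Ada'}  (email and active gone: full replacement)
print(updated)   # -> {'name': 'Ada', 'email': 'ada@example.com', 'active': False}
```

A client that sends a partial body with PUT silently drops the omitted fields, which is why partial updates belong to PATCH.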
- PV & PVC binding mechanism — The automatic matching process where PVCs search for PVs with compatible capacity, access modes, and storage class, remaining in Pending state until a suitable PV is found and bound.
- PV lifecycle and status states — PVs transition through four states: Available (unbound), Bound (claimed by PVC), Released (PVC deleted but not reclaimed), and Failed (reclamation failure), with lifecycle independent of Pods.
- PV lifecycle states — The four states of a PersistentVolume: Available (free for binding), Bound (attached to PVC), Released (PVC deleted but not reclaimed), and Failed (reclaim failed).
- PV reclaim policies — Three strategies for handling PV after PVC deletion: Retain (manual recovery), Recycle (deprecated, rm -rf), and Delete (automatic backend storage deletion with PVC).
- Python Alpine Docker development workflow — Using Alpine-based Python Docker images with volume mounting for iterative development, keeping container images minimal while enabling hot-reload of source code
- Python classes and objects — Object-oriented programming constructs that group related data and behavior into reusable types, using constructors (__init__) for initialization and methods for behavior.
- Python control flow statements — Conditional logic constructs (if/elif/else) that enable branching execution paths based on boolean conditions, used to implement business rules and validation.
- Python CSV module — Python's built-in csv library for reading and writing CSV files, providing DictReader and csv.writer utilities for handling structured data with headers, as demonstrated in the customer data example.
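The DictReader/writer round trip can be sketched with in-memory buffers (the customer fields are illustrative):

```python
import csv
import io

# Write a header row plus data rows with csv.writer
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["name", "email"])
writer.writerow(["Ada", "ada@example.com"])

# Read the same data back as dicts keyed by the header row
reader = csv.DictReader(io.StringIO(buf.getvalue()))
rows = list(reader)
print(rows[0]["email"])  # -> ada@example.com
```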
- Python data structures — Built-in collection types including arrays/lists (ordered sequences), dictionaries (key-value mappings), and their operations like append, remove, and indexed access.
- Python file access modes — The four primary file access modes: 'r' (read), 'w' (write), 'a' (append), and 'x' (create), each with distinct behavior regarding file existence and content handling
- Python file open() function — Python's built-in open() function for accessing files with different access modes (read, write, append, create) and the importance of proper file closing
- Python file-based data persistence patterns — Reading and writing data to files for persistence, demonstrated with CSV format earlier and JSON format here, using context managers (with statements) for proper file handling
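A minimal sketch of the JSON variant, with `with` statements guaranteeing the file is closed even if an error occurs (the file name is illustrative):

```python
import json
import os
import tempfile

data = {"customers": [{"name": "Ada", "active": True}]}
path = os.path.join(tempfile.gettempdir(), "customers.json")

# Write via a context manager; json.dump serializes straight to the file
with open(path, "w") as f:
    json.dump(data, f)

# Read it back; json.load parses the file object directly
with open(path) as f:
    restored = json.load(f)
print(restored == data)  # -> True
```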
- Python functions — Reusable code blocks that encapsulate logic with optional inputs and return values, promoting single-responsibility design and testability.
- Python JSON library (json module) — Python's built-in json module provides functions like json.dumps() and json.loads() for converting between Python dictionaries and JSON formatted strings
- Python JSON serialization basics — Python's json module provides dumps() and loads() functions for converting between dictionaries and JSON formatted strings
- Python loops — Iteration constructs including while loops (condition-based repetition) and for loops (iterating over collections), used to process lists and dictionaries.
- Python package management with pip and requirements.txt — Managing external Python dependencies through pip and requirements.txt files for reproducible environments and version control
- Quay.io — A Docker container registry service for storing, managing, and distributing Docker images, similar to Docker Hub but offered by Red Hat.
- QuickAdd and Templater integration — QuickAdd depends on the Templater plugin for file creation workflows, though there is a known conflict where Templater's 'create new file' template functionality becomes disabled when QuickAdd is active.
- QuickAdd Capture mode — A QuickAdd choice type for capturing input content and appending it to existing files, commonly used for fleeting notes and intermittent journaling.
- QuickAdd choice types — The four configuration options in QuickAdd: Capture (append to files), Template (create from Templater), Multi (nested menus), and Macro (custom JavaScript)
- QuickAdd plugin — Obsidian plugin for rapid data entry and automation, allowing users to quickly add content following predefined patterns or templates.
- Qwen 3.5 35B local deployment — The Qwen 3.5 35 billion parameter language model can run locally on MacBooks with sufficient RAM (32 GB recommended) when using NVFP4 quantization through Ollama's MLX backend.
- Qwen 3.6 27B for Agentic Coding — A 27B-parameter open-source model from Alibaba Tongyi, optimized for agentic coding and long-context reasoning, supporting repository-level code understanding, native tool calling, and task-thread continuity across long conversations
- Qwen deployment comparison: Ollama vs vLLM vs MLX — Comparison of three Qwen deployment options: Ollama is simple but supports only the 35B A3B1 variant; vLLM fully supports 27B and suits production agent workflows; MLX support for local deployment on Apple Silicon is forthcoming
- RabbitMQ — An open-source message broker that implements the AMQP protocol, providing message queuing capabilities for distributed systems and enabling asynchronous communication between applications.
- RabbitMQ Dead Letter Queue with Delay Pattern — Error handling pattern using dead letter exchanges and TTL-based delay queues to manage failed report generation and retry with backoff.
- RabbitMQ Message Queue Implementation — AMQP message broker providing six queue modes (simple, work, publish/subscribe, routing, topics, RPC), dead letter queue functionality, delay patterns with TTL, and integration with Spring Boot for reliable asynchronous messaging.
- RabbitMQ Queue Modes — Six different queue configuration patterns in RabbitMQ that support various messaging scenarios including simple queues, work queues, publish/subscribe, routing, topics, and RPC patterns.
- Radical Flexibility Pattern — Architecture supporting 18+ LLM providers, 14 messaging platforms, 6 terminal backends, and 3 API modes without vendor lock-in through pluggable adapters
- RAM-backed emptyDir (tmpfs) — A high-performance emptyDir configuration option where the medium field is set to Memory, storing data in RAM as a tmpfs filesystem instead of on disk, providing faster access at the cost of volatility and memory constraints.
- Range-based pagination — A pagination approach using BETWEEN or comparison operators on a sequential key to fetch a specific range of rows, offering optimal performance when the key distribution is known and contiguous.
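Assuming a contiguous integer key, a page maps to a BETWEEN range resolved through the primary-key index; this sqlite3 sketch uses made-up table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, f"item-{i}") for i in range(1, 101)])

# Page 3 with page size 10 covers ids 21..30
page, size = 3, 10
lo, hi = (page - 1) * size + 1, page * size
rows = conn.execute(
    "SELECT id, name FROM items WHERE id BETWEEN ? AND ?", (lo, hi)
).fetchall()
print(rows[0], rows[-1])  # -> (21, 'item-21') (30, 'item-30')
```

Unlike OFFSET-based pagination, the database never scans and discards the preceding rows, but the approach only works when the key distribution is dense and gap-free.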
- Rapid framework formation — The practice of quickly establishing document or project structures using templates to accelerate workflow and reduce initial setup time
- Rapid skill learning principles — Ten guidelines for efficient skill acquisition including: focus on one skill, set reasonable targets, gather essential tools, eliminate distractions, allocate dedicated practice time, establish quick feedback loops, emphasize quantity over perfection in early practice.
- Rapid Technology Learning Framework — A systematic approach for quickly learning new technologies by focusing on essential concepts through video tutorials, official documentation, and hands-on practice.
- RBAC ClusterRoleBinding for Dashboard — ClusterRoleBinding configuration that grants cluster-admin permissions to ServiceAccounts, enabling Dashboard access to Kubernetes resources across all namespaces.
- RBAC ClusterRoleBinding for dashboard access — Role-based access control configuration that binds the cluster-admin ClusterRole to a ServiceAccount (default in kube-system namespace) to grant administrative permissions for Kubernetes Dashboard access.
- RBAC in Kubernetes — Role-Based Access Control configuration in Kubernetes using ServiceAccounts, ClusterRoles, and ClusterRoleBindings to authorize component access to cluster resources
- RBAC Permission Rules — The structure of permissions in Kubernetes RBAC consisting of three components: apiGroups (resource API groups like core, apps, batch), resources (object types like pods, deployments), and verbs (actions like get, list, create, delete).
- RBAC Subjects — The entities that can be bound to Roles in Kubernetes RBAC, including User accounts, ServiceAccounts, and Groups (with system: prefixes reserved for Kubernetes system groups).
- RDBMS-to-Elasticsearch mapping — Conceptual correspondence between relational database terminology and Elasticsearch: tables map to indices, rows to documents, columns to fields, schemas to mappings, and SQL to DSL (Domain Specific Language).
- React 19 + SWR real-time data fetching — Frontend architecture using React 19 with Vite 8, SWR for data fetching and caching, and WebSocket integration for automatic revalidation when backend signals file changes
- Reactor pattern — A design pattern for demultiplexing and dispatching multiple I/O events to their corresponding event handlers in a single-threaded or event-driven environment, commonly used in high-performance network applications.
- Reactor pattern with epoll — An I/O event notification mechanism used by Redis that employs the Reactor pattern with epoll for efficient, scalable network I/O handling in single-threaded environments.
- Reactor vs Proactor — Comparison between two asynchronous I/O design patterns: Reactor (synchronous event demultiplexing with synchronous handlers) and Proactor (asynchronous operation completion with true async I/O).
- README Templates Folder — An organizational component within documentation systems that stores pre-built README template files for consistent project scaffolding.
- Rebase conflict resolution — The process of handling merge conflicts during rebase operations by manually resolving conflicts, staging files with git add, and continuing the rebase with git rebase --continue.
- Rebase workflow with feature branches — A development pattern where feature branches are rebased onto the main branch before merging, ensuring the feature branch contains the latest base code and creating clean integration history.
- Recreate Deployment — A deployment strategy that completely shuts down the old version before launching the new version, causing service downtime during the deployment process.
- Redis (database) — An in-memory NoSQL database that supports persistence through RDB and AOF mechanisms, using single-threaded execution with Reactor epoll I/O event notification for high performance.
- Redis CLI access in Docker containers — Method for accessing the Redis command-line interface inside a running Docker container using docker exec with the redis-cli command
- Redis CLI basic commands — Essential redis-cli operations for database management: PING (connectivity), SET/GET (key-value storage), INCR (atomic increment), SELECT (database switching), DBSIZE, FLUSHDB, and FLUSHALL
- Redis Connection Retry Pattern — Exponential backoff retry wrapper function for handling Redis connection errors and master failover scenarios gracefully
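The backoff wrapper is independent of any Redis client; this generic sketch retries on ConnectionError with doubling delays, using a simulated flaky call in place of a live connection:

```python
import time

def with_retry(fn, retries=4, base_delay=0.01):
    """Retry fn with exponential backoff; re-raise after the last attempt."""
    for attempt in range(retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.01, 0.02, 0.04, ...

# Simulate a master failover: the first two calls fail, then recover
calls = {"n": 0}
def flaky_get():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("master not ready")
    return "value"

result = with_retry(flaky_get)
print(result)  # -> value
```

In practice the wrapped call would be a redis-py operation, and the caught exception the client's connection error type.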
- Redis container port mapping — Exposing Redis inside Docker to the host system by mapping container port 6379 to host port 6379
- Redis CRUD Operations in Python — Basic create, read, update, and delete operations using redis-py library, including key-value storage and JSON serialization
- Redis data structures — Five fundamental data types supported by Redis: string, list, set, sorted set (zset), and hash
- Redis Data Structures and Operations — In-memory NoSQL database supporting persistence through RDB and AOF mechanisms, providing five fundamental data types (string, list, set, sorted set, hash) with single-threaded execution and Reactor epoll I/O for high performance.
- Redis Hash-Based Query Coordination — Coordination mechanism using Redis hashes with 10-minute TTL keys to track query progress across multiple database listeners, storing intermediate results and incrementing queryDoneCount for synchronization
- Redis installation from source — Step-by-step process for compiling and installing Redis from source code on Linux systems, including prerequisites (gcc compiler), downloading releases, and using make to build the server
- Redis kernel optimization warnings — Three critical Linux kernel parameters that must be configured for Redis performance: net.core.somaxconn (TCP backlog), vm.overcommit_memory (memory allocation), and Transparent Huge Pages (THP) disabling
- Redis performance characteristics — Redis achieves high speed through single-threaded in-memory operation, where CPU is not the bottleneck; instead, memory size and network bandwidth are the primary limiting factors, avoiding thread context switching overhead.
- Redis persistence mechanisms — Two durability approaches for persisting in-memory data to disk: RDB (snapshot-based) and AOF (append-only file logging)
- Redis process monitoring — Linux commands for verifying Redis server operation: ps filtering, netstat on port 6379, and lsof for port checking to confirm the Redis server is running
- Redis Sentinel Configuration — Configuration and connection setup for Redis high availability using Sentinel for automatic master-slave failover in Python applications
- Redis Sentinel with Go — Using Redis Sentinel for high availability and automatic failover in Go applications via the go-redis client library's failover client configuration.
- Redis use cases — Three primary application patterns: database (primary storage), caching (performance layer), and message broker (middleware)
- redis-crud-operations-with-json-serialization — Storing and retrieving Go structs in Redis by marshaling to JSON bytes with json.Marshal(), storing with Set(key, bytes), retrieving with Get(key), and unmarshaling back to structs with json.Unmarshal().
- redis-key-pattern-matching-with-keys-command — Using Redis KEYS command with wildcard pattern (*) to retrieve all keys in the database, enabling bulk retrieval operations where each key's value is fetched individually.
- redis-sentinel-failover-client-configuration — Redis Sentinel provides high availability through automatic failover. Go applications connect using NewFailoverClient with MasterName, SentinelAddrs (split from comma-separated string), and Password parameters.
- RedisConnection — The core connection abstraction in Spring Data Redis that represents a physical connection to a Redis server, used for low-level operations.
- RedisConnectionFactory — Factory pattern implementation for creating and managing Redis connection instances in Spring applications
- RedisTemplate — The central helper class in Spring Data Redis that simplifies Redis data access operations, providing methods for interacting with Redis data structures and supporting both key type and key bound operations.
- Reference application pattern — Using a complete, production-style example application to demonstrate framework best practices, implementation patterns, and real-world usage scenarios
- Reference counting — A memory management technique that tracks the number of references to an object, incrementing on retain and decrementing on release to determine when deallocation is safe
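A toy illustration of the retain/release bookkeeping (not any particular runtime's implementation):

```python
class RefCounted:
    def __init__(self):
        self.count = 1           # creation implies one owning reference
        self.deallocated = False

    def retain(self):
        self.count += 1

    def release(self):
        self.count -= 1
        if self.count == 0:      # no owners left: safe to deallocate
            self.deallocated = True

obj = RefCounted()
obj.retain()    # a second owner appears
obj.release()   # first owner gone, count back to 1
obj.release()   # last owner gone
print(obj.deallocated)  # -> True
```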
- ReferencePipeline — Abstract base class in Java Stream API representing an intermediate pipeline stage or source stage, providing the foundation for building stream operation chains.
- Referrer-Policy header — An HTTP header controlling how much referrer information is sent when users navigate between websites, with options ranging from no-referrer (maximum privacy) to unsafe-url (full information disclosure).
- Regional cryptographic standards — National or region-specific cryptographic algorithms including Russian GOST standards (GOST 28147-89, GOST R 34.11-94, GOST R 34.10-2001) and Chinese SM standards (SM2, SM3, SM4).
- RejectedExecutionHandlers — Netty's strategy components for handling tasks that cannot be executed by an EventExecutor when the executor is saturated or unavailable.
- Relationship graph — A visual representation of note connections that displays the network structure and relationships between different pieces of information in a knowledge base.
- Relationship graph visualization — A visual representation of note connections that displays the network structure and relationships between different pieces of information in a knowledge base.
- ReplicaSet — A Kubernetes API object that maintains a stable set of Pod replicas at a specified number, enabling horizontal scaling and rolling updates by managing Pod templates and replica counts.
- Report Generation State Machine — A three-state lifecycle (RUNNING → SUCCESS/FAIL) for tracking report generation status with database persistence and state transition validation.
- Report Query Locking by Admin User — Concurrency control mechanism that locks report generation requests by backend admin user ID to prevent duplicate processing and ensure process ownership
- Report Status State Machine — A three-state lifecycle model for asynchronous report processing with states RUNNING (processing), SUCCESS (completed), and FAIL (failed), ensuring reliable status tracking and preventing duplicate processing.
- Repository note stub pattern — A minimal documentation pattern consisting of metadata, title, tags, and cross-references to other notes, serving as a placeholder for future content expansion
- repository-dispatch-event — A GitHub Actions mechanism that allows workflows to trigger other workflows in the same repository through custom event types, enabling inter-workflow communication and orchestration.
- Request chaining with authentication tokens — The pattern of extracting authentication tokens from one API response (login) and automatically injecting them into subsequent requests for authenticated API calls.
- Requeueable Message Listener Pattern — A RabbitMQ consumer pattern that provides automatic message requeuing on processing failures with configurable exchange and queue bindings, improving reliability of message processing.
- Requeueable RabbitMQ Listener Pattern — A message listener wrapper that provides automatic requeuing capability with exchange and queue configuration for handling transient failures in message processing.
- Resource Timing API — W3C specification that extends performance monitoring capabilities to individual resources loaded by a page, providing detailed timing information for images, scripts, stylesheets, and other assets beyond the main page navigation timing.
- REST API credential management — The process of configuring authentication credentials for API access through interactive username/password setup using vmrest -C command, requiring password confirmation for successful credential updates.
- REST API testing tools comparison — Comparison of REST API testing approaches including VSCode REST Client for editor-integrated testing versus Postman for GUI-based testing with advanced scripting capabilities.
- REST Client (VSCode Extension) — A Visual Studio Code extension that allows developers to send HTTP requests and view responses directly within the editor using REST syntax in .http or .rest files.
- REST Client variable syntax — Syntax patterns for defining and using variables in REST Client requests, including environment variables (@name), request chaining ({{response.body.$.path}}), and variable substitution.
- RESTful report generation and download endpoint design — API pattern separating report generation initiation from file download, using async creation and retrieval by report type and ID
- RESTful resource representation — The principle that URIs represent resources with noun-based identifiers and that clients interact with resource representations through HTTP protocol semantics, not URI-based actions
- RESTful URI naming conventions — Rules for designing RESTful resource identifiers using plural nouns instead of verbs, mapping URIs to database table collections (e.g., /zoos, /zoos/ID/animals)
- Reverse index optimization — A database indexing technique where storing reversed strings allows efficient queries for suffix patterns (%keyword) when the LIKE operator is applied to reversed values.
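The trick can be demonstrated in sqlite3: store the reversed string alongside the original, and a suffix search for '%son' becomes an index-friendly prefix search 'nos%' on the reversed column (table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, name_rev TEXT)")
conn.execute("CREATE INDEX idx_rev ON users(name_rev)")
for name in ["johnson", "wilson", "parker"]:
    conn.execute("INSERT INTO users VALUES (?, ?)", (name, name[::-1]))

# Suffix query LIKE '%son' rewritten as a prefix query on the reversed value
suffix = "son"
rows = conn.execute(
    "SELECT name FROM users WHERE name_rev LIKE ?", (suffix[::-1] + "%",)
).fetchall()
matches = sorted(r[0] for r in rows)
print(matches)  # -> ['johnson', 'wilson']
```

A leading-wildcard LIKE forces a full scan, whereas the rewritten prefix pattern can use the index on the reversed column.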
- Reverse thinking in skill learning — A problem-solving approach that involves envisioning worst-case scenarios and failure modes to identify critical learning points and preventive measures, thereby uncovering aspects that might otherwise be overlooked.
- Roam-Highlighter — A Chrome browser extension that captures web page highlights in Roam Research format for easy pasting into note-taking applications like Obsidian
- Roam-Highlighter extension — Chrome extension for rapid data copying and highlighting, used alongside Obsidian for efficient content capture from web sources.
- Role vs ClusterRole — Two Kubernetes RBAC resources that define permission rules: Role is namespaced (restricted to a specific namespace), while ClusterRole is cluster-scoped and can grant permissions across all namespaces or to cluster-level resources.
- RoleBinding vs ClusterRoleBinding — Kubernetes RBAC resources that bind Roles or ClusterRoles to subjects (Users, ServiceAccounts, or Groups): RoleBinding grants permissions within a namespace, while ClusterRoleBinding grants cluster-wide permissions.
- Rolling file policies — Log4j2 policies including SizeBasedTriggeringPolicy for file rotation and DefaultRolloverStrategy for managing retained log file counts.
- Rolling Update (Ramped) Deployment — A zero-downtime deployment strategy that gradually replaces instances of the old version with the new version, configurable through parameters like max surge, max unavailable, and batch size.
- Root Certificate Authority — A top-level certificate authority that issues and signs digital certificates, typically trusted by operating systems and browsers as the foundation of the PKI trust chain
- Rootfs and container images — The root filesystem mounted at a container's root directory that provides the isolated execution environment, typically composed of multiple layered filesystems
- Rootless container execution — The security practice of running containers without root privileges, reducing the attack surface and potential system impact of containerized applications.
- Router web interface automation — Automated interaction with router administrative interfaces through HTTP requests, simulating user login and navigation to extract status information like current IP addresses or connection details.
- router-default-password-access — The practice of placing default network credentials on the physical hardware (typically on the back or bottom of routers) for initial setup access.
- rsync synchronization — File synchronization utility that copies only changed or different files between source and destination, preserving existing content without deletion
- SafeNet ProtectToolkit-J — Commercial cryptographic toolkit that extends standard JCA/JCE APIs with additional algorithms and parameter specifications, including detailed reference manuals.
- sagan (Spring reference application) — The official reference application and site implementation for spring.io, maintained by Spring team as a demo and documentation resource
- Scaffold-based content creation — Hexo's mechanism for using predefined template files from the scaffolds directory to generate new content with consistent structure.
- Scala development environment compatibility — The strict version dependencies between Scala compiler, JDK, and IDE versions, where even minor version differences can prevent successful compilation and require coordinated version management across the development toolchain.
- Scarcity-based marketing psychology — Marketing principle leveraging limited quantity and time constraints to trigger impulse buying behavior, where the perception of scarcity creates urgency and fear of missing out (FOMO) among consumers.
- Scoop — A command-line package manager for Windows that operates without administrator privileges by installing applications to the user's home directory, leveraging PowerShell for installation and management.
- Scoop buckets — Repository collections in Scoop that group related software packages, allowing users to add custom sources (like the java bucket) beyond the default repository to access specialized applications.
- Scoop package manager — A command-line package manager for Windows designed for installing software from the terminal, with a focus on developer tools and command-line applications.
- Scoop shim mechanism — Scoop's path management technique using shim files in ~\scoop\shims that link to installed applications, eliminating the need to modify system PATH variables
- Scoop version switching — The ability to manage multiple versions of the same software using 'scoop reset' to change which version is currently active, demonstrated with switching between OpenJDK versions
- SDKMAN — A parallel version manager for software development kits that allows developers to manage multiple versions of Java, Gradle, Maven, and other JVM-based tools.
- Search parameter deduplication via hashing — Preventing duplicate report generation by storing MD5 hashes of search parameters and checking for recent identical requests
- Search Parameter Hash Deduplication — Technique for preventing duplicate report generation within a time window by hashing search parameters (MD5) and checking recent requests before processing.
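One way to sketch the check: hash a canonical serialization of the parameters and reject repeats inside the window. The window length and the in-memory dict (standing in for a database table of recent requests) are illustrative:

```python
import hashlib
import json
import time

WINDOW_SECONDS = 600
recent = {}  # digest -> last-seen timestamp

def should_generate(params, now=None):
    now = time.time() if now is None else now
    # Canonicalize with sorted keys so key order doesn't change the hash
    digest = hashlib.md5(
        json.dumps(params, sort_keys=True).encode()
    ).hexdigest()
    last = recent.get(digest)
    if last is not None and now - last < WINDOW_SECONDS:
        return False             # identical request within the window
    recent[digest] = now
    return True

print(should_generate({"region": "eu", "month": "01"}, now=0))    # -> True
print(should_generate({"month": "01", "region": "eu"}, now=100))  # -> False
```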
- Secret mounting methods — Three approaches to consume Kubernetes Secrets in applications: as environment variables, as files mounted to a specific path in the Pod, or as docker-registry credentials for pulling private images.
- Secret Volume — Similar to ConfigMap but designed for sensitive data like passwords, certificates, and credentials. Data is base64-encoded. Provides specialized security features beyond standard configuration storage.
- Secret-based password retrieval in Kubernetes — A pattern for extracting sensitive data from Kubernetes secrets using kubectl with jsonpath and base64 decoding, as applied to ArgoCD's initial admin password.
- Security context-based validation pattern — Extracting authorization logic into dedicated validator classes that access Spring Security's SecurityContextHolder to check user authorities against business rules
- Select I/O model — A synchronous I/O multiplexing mechanism that allows a single process to monitor multiple file descriptors to see if any are ready for I/O operations.
- Select/Epoll I/O multiplexing — Operating system mechanisms for monitoring multiple file descriptors to determine which are ready for I/O operations, enabling efficient handling of many concurrent connections.
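Python's selectors module wraps these mechanisms behind one interface; this sketch monitors one end of a socketpair for readability:

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # epoll on Linux, kqueue/select elsewhere
r, w = socket.socketpair()
sel.register(r, selectors.EVENT_READ)

w.send(b"ping")                    # make the read end ready
events = sel.select(timeout=1)     # blocks until a registered fd is ready
key, mask = events[0]
data = key.fileobj.recv(4)
print(data)  # -> b'ping'

sel.close()
r.close()
w.close()
```

A real server would register many client sockets and loop over `sel.select()`, dispatching each ready descriptor to its handler.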
- SelectionKey Ready Operations Bitmask — Java NIO SelectionKey uses bit operations (OP_READ=1, OP_WRITE=4, OP_CONNECT=8, OP_ACCEPT=16) to track which I/O operations a channel is ready for, with readyOps() initialized to zero and updated during selection.
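The same bookkeeping can be mimicked with plain bit operations, using the Java constant values named above:

```python
# Bit flags matching Java NIO's SelectionKey constants
OP_READ, OP_WRITE, OP_CONNECT, OP_ACCEPT = 1, 4, 8, 16

ready_ops = 0                    # starts at zero, as in SelectionKey
ready_ops |= OP_READ | OP_WRITE  # selection marks the channel ready

print(bool(ready_ops & OP_READ))    # -> True
print(bool(ready_ops & OP_ACCEPT))  # -> False
```

Packing the flags into one integer lets a selector test or combine interest sets with single AND/OR operations.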
- Selenium screenshot capture — Capability to capture browser screenshots using TakesScreenshot interface with element-specific cropping via BufferedImage operations
- Selenium tab management — Technique for handling multiple browser tabs using getWindowHandles() and switchTo().window() to control different tab contexts
- Selenium WebDriver Java API — Java classes and interfaces for browser automation including RemoteWebDriver, ChromeDriver, JavascriptExecutor, and TakesScreenshot
- Self-hosted Git service — A version control hosting solution that organizations or individuals deploy and manage on their own infrastructure rather than using cloud-based services like GitHub or GitLab.
- Self-Signed Certificate — A type of digital certificate signed by the same entity whose identity it certifies, requiring manual trust configuration in browsers and applications
- Semantic token compression — Token reduction strategy that leverages LLMs' ability to reliably reconstruct syntax and structure, allowing removal of predictable language elements while retaining information-dense content that cannot be predicted.
- Sequential Flow Pattern (Claude Code) — The most basic Claude Code workflow where tasks execute sequentially in a single terminal with accumulating context, suitable for dependent tasks but limited by context window growth and eventual 'context rot'.
- SerialExecutor Pattern — A concurrency pattern that serializes multiple Runnable tasks through a queue while delegating actual execution to an underlying executor, ensuring tasks execute sequentially while maintaining thread safety.
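The canonical example appears in Java's Executor documentation; this is a hedged Python analogue that queues tasks and hands at most one at a time to a delegate thread pool:

```python
import threading
from collections import deque
from concurrent.futures import ThreadPoolExecutor

class SerialExecutor:
    """Run submitted tasks one at a time on top of a delegate executor."""

    def __init__(self, delegate):
        self.delegate = delegate
        self.tasks = deque()
        self.active = None
        self.lock = threading.Lock()

    def submit(self, fn):
        with self.lock:
            self.tasks.append(fn)
            if self.active is None:   # idle: kick off the first task
                self._schedule_next()

    def _schedule_next(self):
        # Caller must hold self.lock
        if self.tasks:
            self.active = self.tasks.popleft()
            self.delegate.submit(self._run, self.active)
        else:
            self.active = None

    def _run(self, fn):
        try:
            fn()
        finally:                      # chain the next task even on failure
            with self.lock:
                self._schedule_next()

results = []
pool = ThreadPoolExecutor(max_workers=4)
ex = SerialExecutor(pool)
done = threading.Event()
for i in range(5):
    ex.submit(lambda i=i: results.append(i))
ex.submit(done.set)                   # sentinel: runs after all appends
done.wait(timeout=5)
pool.shutdown(wait=True)
print(results)  # -> [0, 1, 2, 3, 4]
```

Even with four worker threads available, tasks run strictly one after another in FIFO order, because a new task is only handed to the pool from the previous task's `finally` block.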
- Server-side image validation — Image dimension and format validation on the server using BufferedImage and ImageIO to verify width/height constraints and MIME type detection from byte arrays
- ServerBootstrapAcceptor — A special ChannelHandler added during server channel initialization that accepts incoming connections and configures child channels with the specified childGroup, childHandler, childOptions, and childAttrs.
- Service Account Secret — Automatically created by Kubernetes and mounted to Pods at /run/secret/kubernetes.io/serviceaccount, containing tokens for authenticating with the Kubernetes API.
- Service account-based authorization — Authorization mechanism that validates requests based on the source workload's service account identity, with configurable allowed service account values via command-line flags.
- Service discovery and DNS architecture in Kubernetes — DNS infrastructure for K8S including Bind9 deployment for internal domain resolution (host.com for infrastructure, od.com for business domains), enabling container hostname binding and service discovery across the cluster.
- Service environment configuration — The practice of configuring containerized services through environment variables to set runtime parameters like database credentials, usernames, and passwords within Docker Compose definitions.
- Service mesh — A dedicated infrastructure layer that handles service-to-service communication in microservices architectures, providing features like traffic management, security, observability, and reliability without requiring changes to application code.
- Service Mesh Architecture — Dedicated infrastructure layer for handling service-to-service communication in microservices architectures, providing traffic management, security policies, observability, and reliability patterns through sidecar proxies like Envoy.
- Service Mesh Distributed Tracing — The practice of tracking requests across multiple services in distributed systems to troubleshoot latency problems, analyze service dependencies, and perform root cause analysis using tools like Jaeger or Zipkin.
- Service mesh ingress configuration — The pattern of configuring external traffic access to services within a mesh through coordinated Gateway and VirtualService resources, enabling protocol-aware routing for technologies like WebSockets.
- Service Mesh Metrics and Monitoring — The use of time-series databases like Prometheus to collect and track health metrics for Istio control plane components and applications within the service mesh, enabling visualization through dashboards and performance analysis.
- Service port mapping — Kubernetes Service configuration involving three port types: port (Cluster IP service port), targetPort (Pod container port), and nodePort (Node access port), enabling flexible traffic routing from external access to container endpoints.
- Service Provider Interface (SPI) pattern — Design pattern enabling third-party implementations to be discovered and loaded at runtime without tight coupling between API and implementation, commonly used in JDBC, JPA, and other Java frameworks
- Service selector-based traffic switching — Technique for controlling Kubernetes deployment traffic by updating Service selector labels to route between different deployment versions without modifying pods
- Service Without Selector — A Kubernetes Service configuration that omits the selector field, preventing automatic pod endpoint discovery and allowing manual Endpoint objects to be created for routing to external addresses or static IPs
- Service-Oriented Architecture (SOA) — A distributed computing philosophy emphasizing service encapsulation, loose coupling, service contracts, registry/discovery, and interface-implementation independence with two implementation approaches: centralized and decentralized.
- ServiceAccount token authentication — Authentication method for Kubernetes where ServiceAccounts have associated Secret objects containing JWT bearer tokens. Tokens can be retrieved using kubectl commands and decoded with base64 to obtain the authentication credential for API access.
- ServiceAccount token extraction from Kubernetes Secret — Process of retrieving authentication tokens from Kubernetes Secrets associated with ServiceAccounts using kubectl describe secret commands and awk parsing.
- ServiceAccount token secret creation — Kubernetes Secret resource of type kubernetes.io/service-account-token that associates with a ServiceAccount to provide authentication tokens for API access.
- ServiceLoader — Java's built-in service-provider loading facility (java.util.ServiceLoader) that discovers and loads implementation classes at runtime using the META-INF/services/ directory convention
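A minimal sketch of the ServiceLoader convention described above. To stay self-contained it writes a provider-configuration file to a temporary directory at runtime and discovers it through a URLClassLoader; the choice of java.util.List as the "service" and java.util.ArrayList as the "provider" is purely illustrative, since a real SPI would use your own interface and implementation classes.

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.List;
import java.util.ServiceLoader;

public class SpiDemo {
    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("spi-demo");
        Path services = dir.resolve("META-INF/services");
        Files.createDirectories(services);
        // Provider-configuration file: the file name is the fully qualified
        // service type; its contents list one implementation class per line.
        Files.write(services.resolve("java.util.List"),
                    Arrays.asList("java.util.ArrayList"));
        try (URLClassLoader cl =
                 new URLClassLoader(new URL[] { dir.toUri().toURL() })) {
            // ServiceLoader scans META-INF/services/ on the given loader
            // and instantiates each listed provider via its no-arg ctor.
            for (List<?> impl : ServiceLoader.load(List.class, cl)) {
                System.out.println("discovered: " + impl.getClass().getName());
            }
        }
    }
}
```

The same convention is what JDBC uses to find drivers: a jar ships META-INF/services/java.sql.Driver naming its implementation, and no caller ever references the class directly.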
- ServletContainerInitializer — Servlet 3.0 interface that allows third-party libraries to programmatically register components (servlets, filters, listeners) during web container startup, replacing web.xml configuration.
- setParent() method behavior — The ApplicationContext.setParent() method establishes parent-child relationships and automatically merges parent environment settings into the child context when the parent's environment is a ConfigurableEnvironment instance.
- setParent() method with environment merging — The setParent() method not only establishes parent-child relationships but also merges the parent's ConfigurableEnvironment with the child context's environment when the parent is non-null.
- SHA256增量缓存 (SHA256 incremental cache) — A caching mechanism keyed on file SHA256 hashes that processes only changed files, enabling incremental knowledge-graph updates and avoiding reprocessing of unmodified content
- Shadow Deployment — A high-resource strategy where new version runs alongside old version receiving mirrored real production traffic for performance testing without affecting users, requiring careful handling to prevent unintended side effects like duplicate transactions.
- Shell environment persistence configuration — Configuration files and methods for persisting shell environment variables and custom functions across sessions, including .bash_profile for Git Bash and Profile.ps1 for PowerShell 7.
- Shell job control — Techniques for managing background and foreground processes including ampersand execution, jobs viewing, and bg/fg process control commands
- Shutdown hooks in Java — Threads registered via Runtime.addShutdownHook() that execute when the JVM terminates normally, allowing cleanup operations; they can be removed and bypassed by Runtime.halt().
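A minimal sketch of the shutdown-hook mechanism from the entry above: the hook thread runs on normal termination (main returning or System.exit), but not after Runtime.halt().

```java
public class ShutdownHookDemo {
    public static void main(String[] args) {
        Thread hook = new Thread(() -> System.out.println("cleanup hook ran"));
        Runtime.getRuntime().addShutdownHook(hook);
        // A registered hook can be withdrawn before shutdown begins:
        // Runtime.getRuntime().removeShutdownHook(hook);
        System.out.println("main finished");
        // JVM now terminates normally, so the hook executes.
    }
}
```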
- Sidecar container pattern — A container architecture pattern where a helper container runs alongside the main application container in the same Pod, sharing volumes and network namespace for tasks like logging, monitoring, or proxying.
- Sidecar container pattern for monitoring — Deployment architecture where monitoring agents (Filebeat) run as containers alongside application containers in the same Pod, sharing emptyDir or hostPath volumes to access application logs and metrics.
- sidecar injection for sample applications — Istio sample services require automatic or manual sidecar injection to enable mesh functionality, with configuration depending on the injection mode enabled in the cluster.
- sidecar logging container patterns — Two sidecar-based logging approaches for applications that write to files: redirecting file output to sidecar stdout/stderr for agent consumption, or directly sending logs to remote storage from sidecar container.
- sidecar logging pattern with FileBeat — Container deployment pattern where FileBeat runs alongside application containers in the same Pod, sharing emptyDir volumes to collect and forward logs to Kafka, using environment-based topic naming (logm-PROJ_NAME, logu-PROJ_NAME)
- sidecar logging patterns — Two patterns for handling file-based logs: redirecting sidecar that forwards file contents to stdout/stderr, and streaming sidecar that sends logs directly to remote storage, each with different resource and visibility trade-offs.
- Signal Detection and Brain-Ops Pattern — Two foundational agent skills where signal-detector captures original ideas and entity mentions from every message in parallel, while brain-ops ensures all external API calls first query the knowledge base in a read-enrich-write cycle.
- Simple sleep service — A minimal Ubuntu container with curl used as a request source for testing Istio networking and service mesh functionality by executing commands from within the pod.
- Simplicity First principle — Write minimal code to solve the problem at hand without speculative abstraction, framework building, or over-engineering for the sake of demonstrating skill.
- six-channel parallel collection — Multi-agent information gathering strategy where 6 agents simultaneously collect from books, podcasts/interviews, social media, critics' perspectives, decision records, and life timeline to build comprehensive subject understanding.
- Skaffold — Google's open-source tool for automated Kubernetes application development workflow, handling container image building and local cluster deployment
- skaffold dev command — Skaffold development mode that enables continuous local development with automatic rebuilding and redeployment upon code changes
- skaffold-modules-and-profiles — Skaffold's organizational structure that allows modular deployment configurations (istiod, ingress, kiali, bookinfo) to be composed together via command-line flags.
- skaffold-run-command — The Skaffold command for production deployment that pulls manifests from remote charts rather than the current branch, used for staged module deployment.
- Skill acquisition pyramid — Three-stage framework consisting of skill learning (preparation/research), skill acquisition (practice and learning new movements), and skill training (repetitive deliberate practice for improvement).
- Skill deconstruction — The practice of breaking down complex skills into smaller, manageable sub-skills and steps to identify the most critical components for focused learning.
- Skill Self-Creation and Self-Improvement — Mechanism where complex tasks automatically generate skills after completion, with skills being patched and refined based on usage feedback rather than being pre-programmed
- Skills reuse system — Capability framework where successful solutions become reusable team assets, allowing deployment, migration, and code review abilities to compound over time through task completion.
- skills.sh compatibility — The Claude Code Skills framework standard enabling interoperable skill distribution through npm packages, with nuwa-skill implementing a meta-skill pattern that can generate new skills following skills.sh conventions.
- Slices vs arrays in Go — Arrays are fixed-size collections of variables of the same type, while slices are dynamically-sized views into arrays that can grow using the append() function.
- SMTP commands — Core set of instructions (HELO, EHLO, STARTTLS, MAIL FROM, RCPT TO, DATA, QUIT) used in SMTP protocol for client-server communication and email transmission
- SMTP envelope vs message — Distinction between envelope senders/recipients (specified by MAIL FROM and RCPT TO commands) and message headers (contained in the email content after DATA command), where envelope and message addresses can differ
- SMTP ENVELOPE vs Message Headers — Distinction between envelope senders/recipients specified by MAIL FROM and RCPT TO commands versus message headers; envelope addresses can differ from message header addresses and control actual message routing.
- Socket programming — A BSD UNIX API that enables network application programmers to focus on application-layer logic without handling transport and network layer details directly.
- Song Hongkang MySQL Course — A comprehensive MySQL database tutorial course covering topics from beginner to advanced levels, available on YouTube and Bilibili platforms
- Spec 驱动开发 (Spec-driven development) — Forces specification clarification before coding: refine the idea, define requirement boundaries and acceptance criteria, and treat the spec as the contractual basis for all subsequent phases (plan, build, test, review).
- Speculative decoding in MLX Engine — Performance optimization using a smaller draft model to generate candidate tokens that the main model validates in parallel, with compatibility checks and draft model cache merging support.
- SpEL Expression Delimiters and Syntax — SpEL uses #{...} as the default expression delimiter, distinguishes between SpEL expressions and property placeholders ${...}, supports ParserContext for custom delimiters, and T() operator for class references.
- SpEL expression types — Categories of SpEL expressions including literal values, mathematical operators, relational operators, logical operators, ternary operators, and collection expressions for List and Map definitions.
- SpEL Parser and Evaluation Context — The core SpEL parsing mechanism using SpelExpressionParser, parseExpression(), and EvaluationContext (StandardEvaluationContext) to parse expressions, configure variables, and resolve values against object graphs.
- SpEL Variable and Method Operations — Techniques for passing variables to SpEL expressions, setting root objects, calling static methods via T(), invoking instance methods, and accessing implementation class properties and methods dynamically.
- SPIFFE — Specification framework for providing verifiable workload identity through SPIFFE Verifiable Identity Documents (SVIDs), used by SPIRE to issue cryptographically secure identities.
- SPIFFE trust domain workload certificates — X.509 certificates issued to workloads with SPIFFE (Secure Production Identity Framework For Everyone) URI Subject Alternative Names encoding identity in the spiffe://trust-domain/namespace/service-account format.
- Spinlock pattern — A busy-waiting synchronization technique where a thread repeatedly checks in a loop for a condition to become true, avoiding the overhead of thread suspension and resumption
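The spinlock entry above can be sketched with an AtomicBoolean: lock() busy-waits on a compare-and-set instead of suspending the thread, which only pays off when critical sections are very short.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLockDemo {
    private static final AtomicBoolean locked = new AtomicBoolean(false);
    private static int counter = 0;  // protected by the spinlock

    static void lock() {
        // Busy-wait until we atomically flip false -> true.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();  // hint to the CPU that we are spinning
        }
    }

    static void unlock() { locked.set(false); }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock();
                try { counter++; } finally { unlock(); }
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start(); a.join(); b.join();
        System.out.println(counter);  // 200000 — no increments lost
    }
}
```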
- Spinnaker architecture — Microservices-based deployment automation platform consisting of specialized components: Deck (UI), Gate (API gateway), Orca (orchestration), Clouddriver (cloud infrastructure), Front50 (persistence), Echo (messaging), and Igor (CI integration).
- Spinnaker continuous deployment platform — A multi-cloud continuous delivery platform for deploying applications with flexible pipelines and global environment visibility, using microservices architecture including Deck, Gate, Orca, Clouddriver, Igor, Echo, and Front50 components.
- SPIRE — The SPIFFE Runtime Environment that implements the SPIFFE specifications to provide secure identity issuance for workloads in distributed systems, used in this context as a Certificate Authority integrated with Envoy's SDS API.
- Split & Merge Pattern (Claude Code) — A hub-and-spoke workflow where Claude automatically divides tasks among up to 10 parallel subagents with independent contexts, then merges results through the main agent, enabling efficient fan-out/fan-in execution within a single session.
- Spliterator — A Java 8 interface for traversing and partitioning elements, designed to support parallel stream operations with characteristic flags that describe data properties.
- Spliterator characteristics — A set of bit-flag constants (ORDERED, DISTINCT, SORTED, SIZED, NONNULL, IMMUTABLE, CONCURRENT, SUBSIZED) that describe the structural and behavioral properties of a data source to optimize stream processing.
- Spliterators utility class — Final utility class in java.util providing static factory methods for creating and working with Spliterator instances, complementing the Spliterator interface.
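The three Spliterator entries above can be illustrated with a plain List source, showing traversal, characteristic flags, and the trySplit() partitioning that parallel streams rely on.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Spliterator;

public class SpliteratorDemo {
    public static void main(String[] args) {
        List<Integer> data = Arrays.asList(1, 2, 3, 4);
        Spliterator<Integer> right = data.spliterator();
        // Characteristic flags describe the source to the stream machinery.
        System.out.println("sized:   " + right.hasCharacteristics(Spliterator.SIZED));
        System.out.println("ordered: " + right.hasCharacteristics(Spliterator.ORDERED));
        // trySplit() hands a prefix of the elements to a new Spliterator;
        // this is how a parallel stream divides work between threads.
        Spliterator<Integer> left = right.trySplit();
        System.out.print("left: ");
        left.forEachRemaining(n -> System.out.print(n + " "));
        System.out.print("\nright: ");
        right.forEachRemaining(n -> System.out.print(n + " "));
        System.out.println();
    }
}
```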
- Spring @Value with SpEL Integration — Integration of SpEL expressions with Spring's @Value annotation for dependency injection, enabling dynamic bean property resolution and referencing other beans like @Value("#{tutorial.topicsList[0]}").
- Spring ApplicationContext hierarchy — The parent-child relationship structure in Spring where child contexts can access beans from parent contexts, but parents cannot access child beans, enabling layered application architectures.
- Spring Aware Interfaces — Interfaces that allow beans to access Spring infrastructure resources (ApplicationContext, BeanFactory, Environment, etc.) through callback injection during initialization
- Spring Batch — A lightweight framework for batch processing applications in Spring, providing reusable components for reading, processing, and writing large volumes of data with transaction management and job processing capabilities.
- Spring Beans and Dependency Injection — Spring's core dependency injection pattern for managing and wiring application components through Spring Beans.
- Spring Boot — A Spring project that provides rapid application development and convention-over-configuration setup for Spring-based applications, referenced as having detailed documentation in a linked file.
- Spring Boot Actuator — A Spring Boot module providing production-ready monitoring features, metrics, and management endpoints for applications.
- Spring Boot auto-configuration — Spring Boot's convention-over-configuration mechanism that automatically configures application components based on dependencies present on the classpath
- Spring Boot build systems — Build tool options for Spring Boot projects supporting dependency management and application packaging, including Ant, Maven, and Gradle
- Spring Boot caching integration — Practical implementation patterns for integrating caching solutions (Redis, Caffeine) into Spring Boot applications to reduce database load and improve response times
- Spring Boot conditional annotations — Family of @Conditional* annotations (@ConditionalOnClass, @ConditionalOnMissingBean, @ConditionalOnExpression, etc.) that control bean creation based on environment, classpath, or SpEL expressions.
- Spring Boot cookie handling — Technical documentation for managing cookies using Spring Boot with the Servlet API
- Spring Boot Developer Tools — Development productivity features in Spring Boot including automatic restart, live reload, and fat jar packaging for streamlined development workflow
- Spring Boot development workflow — The systematic process and stages involved in developing SpringBoot applications, from project initialization through implementation and deployment.
- Spring Boot Elasticsearch auto-configuration — Spring Boot provides automatic configuration classes for integrating with Elasticsearch through ElasticsearchAutoConfiguration and JestAutoConfiguration
- Spring Boot FailureAnalyzer — Diagnostic interface for analyzing startup failures and providing user-friendly error messages; implementations registered via META-INF/spring.factories can access BeanFactory or Environment.
- Spring Boot learning roadmap — A structured learning path covering Spring framework and SpringBoot2, including core technologies and reactive programming, tracked through documentation notes.
- Spring Boot static resource locations — Default classpath locations where Spring Boot serves static resources (META-INF/resources/, resources/, static/, public/) and the WebMvcAutoConfiguration that handles them.
- Spring Boot testing slices — Specialized test annotations like @DataJpaTest, @DataElasticsearchTest, and @DataJdbcTest that auto-configure only relevant beans for testing specific layers of an application.
- Spring Boot welcome page handling — Automatic detection and serving of welcome pages (index.html) through WelcomePageHandlerMapping in Spring MVC's auto-configuration.
- Spring Boot-Elasticsearch version compatibility — Version compatibility between Spring Boot and Elasticsearch requires careful attention as different versions often have incompatibility issues
- Spring Cloud — A set of tools and frameworks for building distributed systems and microservices, providing patterns for configuration management, service discovery, circuit breakers, and distributed tracing.
- Spring Component Registration Mechanisms — Various ways to register components in Spring including package scanning, @Bean annotation, @Import with ImportSelector and ImportBeanDefinitionRegistrar, and FactoryBean interface
- Spring Data — A family of projects that provides consistent data access patterns for relational and non-relational databases, reducing boilerplate code and simplifying repository implementation across multiple data storage technologies.
- Spring Data examples repository — Community-maintained GitHub repository containing example projects demonstrating Spring Data implementations and patterns
- Spring Data JPA Query DSL deprecation — The deprecation of Query DSL support in Spring Data 2.2, indicating removal or discontinuation of this query building feature.
- Spring Data JPA XML configuration — Traditional XML-based configuration approach for setting up Spring Data JPA with data source, EntityManagerFactory, transaction management, and repository scanning
- Spring Data Redis — Spring Framework module for integrating Redis as a data store, providing connection management and high-level abstraction templates
- Spring ecosystem — The comprehensive collection of Spring framework projects and modules covering various aspects of enterprise Java development from web services to security to data processing.
- Spring ecosystem documentation structure — Organizational framework for Spring Framework and SpringBoot documentation, including core technologies, development workflows, and project-specific references.
- Spring Ecosystem Framework — Comprehensive Java framework collection including Spring Boot for rapid development, Spring Data for data access, Spring Security for authentication, and extensive auto-configuration capabilities for modern enterprise applications.
- Spring ecosystem resources — Official Spring project repositories and reference materials available through GitHub and spring.io for developers learning and implementing Spring-based solutions
- Spring event publishing and listening — Observer pattern implementation in Spring using ApplicationEventPublisher for broadcasting events and ApplicationListener for handling them
- Spring Expression Language (SpEL) — A powerful expression language for the Spring framework that supports querying and manipulating object graphs at runtime, with syntax including #{ } delimiters, literal expressions, mathematical operations, and method invocation.
- Spring framework documentation structure — Hierarchical organization of Spring and Spring Boot learning materials and reference documentation into a centralized navigation map.
- Spring Framework learning resources — Curated collection of official Spring documentation, guides, example projects, and community translations for Spring Boot and Spring Data
- Spring framework learning roadmap — A structured learning progression through Spring ecosystem technologies, organized as sequential documentation tasks covering Spring framework, SpringBoot, and advanced topics like reactive programming.
- Spring JPA transaction management — Configuration of JpaTransactionManager with annotation-driven transaction support using @Transactional and @Rollback for testing
- Spring Loaded agent — A JVM agent that enables hot class reloading by detecting and loading modified class files during application runtime, eliminating the need for server restarts during development.
- Spring MVC auto-configuration — Automatic setup of Spring MVC components including ContentNegotiatingViewResolver, HttpMessageConverters, static resource handling, and Formatters, which can be customized or overridden.
- Spring MVC strategy interfaces — Collection of pluggable strategy components (HandlerMapping, HandlerAdapter, ViewResolver, etc.) that allow DispatcherServlet's request processing workflow to be customized and extended.
- Spring Profile Environment Switching — SpringBoot's configuration mechanism for managing different environment-specific settings (development, testing, production) through profile activation and configuration property precedence
- Spring REST Docs — A Spring project for documenting RESTful services, highlighted in the source with a direct link and categorized under devops documentation practices.
- Spring Sagan — The Spring Framework's official website and community platform, providing documentation, guides, and resources for Spring ecosystem developers.
- Spring Sagan reference application — The open-source Spring Boot application that powers the spring.io website, frequently studied as a real-world reference implementation of Spring ecosystem practices.
- Spring Security — Spring's comprehensive security and access-control framework for Java applications, included in the list of core Spring projects.
- Spring Test context configuration — Setting up JUnit tests with Spring Test framework using @ContextConfiguration and SpringRunner for integration testing with full application context
- Spring XXXAware vs XXXProcessor pattern — Distinction between Aware interfaces for bean callbacks and Processor interfaces for container lifecycle hooks, including initialization order and invocation timing
- SpringBoot Actuator — Production-ready monitoring and management feature set providing endpoints for health checks, metrics collection, and application observability with integration to monitoring systems like Micrometer
- SpringBoot Auto-Configuration — SpringBoot's mechanism for automatically configuring beans and components based on classpath dependencies, eliminating manual XML configuration through intelligent defaults and conditional registration
- SpringBoot Conditional Bean Registration — The @Conditional annotation system for registering beans only when specific conditions are met, forming the basis of auto-configuration and feature toggling
- SpringBoot Request Parameter Annotations — Collection of annotations for binding HTTP request parameters to method parameters including @RequestAttribute, @MatrixVariable, and custom parameter binding through WebDataBinder
- Springfox — A library that integrates Swagger into Spring Boot applications, automatically generating API documentation from Spring controllers and annotations.
- Springfox Swagger2 — A Spring framework integration library that automates API documentation generation by combining Springfox with Swagger2 specification, commonly used in Java Spring Boot applications for REST API documentation.
- SpringServletContainerInitializer — Spring's implementation of ServletContainerInitializer that handles WebApplicationInitializer implementations and bootstraps Spring web applications without web.xml.
- Spring实战翻译项目 (Spring in Action Chinese translation) — Community-driven Chinese translation projects covering editions of Spring in Action, providing localized learning materials
- SQL duplicate prevention pattern — A SQL pattern using NOT IN subqueries with UNION ALL to merge datasets while excluding records that already exist in the target table, commonly used during incremental data synchronization.
- SQL LIKE pattern matching and indexing — How SQL LIKE pattern matching interacts with database indexes: prefix patterns (keyword%) use indexes, while infix (%keyword%) and suffix patterns (%keyword) typically cannot.
- SSH key generation — Creating SSH public/private key pairs using the ssh-keygen command with specific parameters for GitHub compatibility.
- SSH key generation command — The complete ssh-keygen command syntax for generating GitHub-compatible RSA keys: ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
- SSH key storage locations — The standard file system paths where SSH keys are stored (.ssh directory) across different operating systems (Windows shown).
- SSH key-based authentication in containers — The method of configuring SSH access without password authentication by copying authorized_keys files into the container's .ssh directory and setting appropriate file permissions.
- SSH passwordless authentication — Setup process for enabling SSH login without passwords by generating RSA key pairs with ssh-keygen and distributing public keys using ssh-copy-id
- SSL Certificate File Formats — Different encoding formats for SSL certificates and keys including PEM (Base64 ASCII), DER (binary), P7B/PKCS#7 (certificate chains without private keys), and PFX/P12/PKCS#12 (encrypted containers with certificates and private keys).
- SSL Certificate Setup for Dashboard — Process of generating and configuring OpenSSL certificates for secure HTTPS access to Kubernetes Dashboard, including certificate creation, Nginx SSL configuration, and certificate installation on ingress controllers.
- SSL Certificate Verification and Trust Issues — Common TLS/SSL connection problems like the 'Unable to Get Local Issuer Certificate' error that occurs when the client cannot verify the server certificate chain, resolvable by using --cacert to specify trusted certificates or --insecure to bypass verification.
- SSL For Free — A free SSL certificate provider that enables HTTPS security for domains and subdomains, including wildcard certificate support for subdomains like *.yudady.tk.
- SSL termination in Kubernetes Ingress — Security practice where Ingress handles HTTPS/TLS decryption at the edge, forwarding unencrypted traffic to backend Services and Pods, reducing computational overhead on application containers.
- SSL/TLS Certificate — Digital credentials that bind a public key to an entity's identity, issued by Certificate Authorities (CAs) to enable secure, authenticated communication through HTTPS and other encrypted protocols.
- SSL/TLS Certificate Management — PKI infrastructure for securing communications including certificate authority hierarchy, OpenSSL certificate generation, Let's Encrypt automation, certificate formats (PEM, DER, PKCS#12), and integration with Kubernetes ingress controllers.
- SSL/TLS protocol inspection — Command-line techniques using OpenSSL (s_client) and cURL to analyze and debug secure server connections, including checking supported cipher suites and protocol versions.
- STARTTLS command — SMTP extension command that clients use after EHLO to request TLS/SSL encryption for secure email transmission, requiring the server to advertise STARTTLS support in its capabilities list
- Startup note configuration — The practice of setting a designated entry point or landing page in note-taking applications to establish a consistent starting context for each work session.
- State externalization principle — The architectural practice of storing application data in external systems like databases instead of local files or memory to prevent data loss during application crashes and enable multi-instance scaling.
- Stateful vs Stateless services — Design philosophy where Kubernetes excels at stateless microservices with externalized persistent data services accessed via API, rather than embedding storage within the application layer.
- Stateless session configuration for REST APIs — Disabling CSRF protection and configuring NEVER session creation policy to create stateless, token-based REST API security configurations
- StatelessOp — Base class for stateless intermediate stages in Java Streams, extending ReferencePipeline to represent operations that don't maintain state across elements (like filter, map).
- Static method reference pattern (ClassName::staticMethodName) — Using double-colon syntax to pass static methods as higher-order function parameters (e.g., BeanDao::currentSql), separating business logic from execution context
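A sketch of the pattern from the entry above: a static method is passed by name so the caller decides when and with what input it runs. BeanDao::currentSql from the entry is not reproduced here; SqlBuilder is a stand-in invented for this example.

```java
import java.util.function.Function;

public class MethodRefDemo {
    static class SqlBuilder {
        static String currentSql(String table) {
            return "SELECT * FROM " + table;
        }
    }

    // Higher-order method: accepts any String -> String strategy,
    // keeping the execution context separate from the SQL-building logic.
    static String run(Function<String, String> sqlFor, String table) {
        return sqlFor.apply(table);
    }

    public static void main(String[] args) {
        // Static method reference using the double-colon syntax.
        System.out.println(run(SqlBuilder::currentSql, "users"));
    }
}
```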
- Static site deployment workflow — Standard development cycle for static sites combining content generation (hexo generate), local testing (hexo server with -w --debug flags), and Git-based deployment to hosting platforms.
- Static vs dynamic PV provisioning — Two approaches to PV creation: static (pre-provisioned by administrators before PVC claims) vs dynamic (automatically created on-demand based on StorageClass when PVC requests storage).
- Static YAML deployment — Alternative deployment method using kubectl apply -f with raw YAML files from GitHub repositories, providing direct manifest-based installation without package managers.
- Stencil-based content extraction pattern — A pattern for extracting specific content from web pages using browser extensions that support customizable templates, focusing on practical capture and transfer workflows.
- Storage access modes — Volume mounting permissions: ReadWriteOnce (single node R/W), ReadOnlyMany (multiple nodes read-only), ReadWriteMany (multiple nodes R/W), and ReadWriteOncePod (single Pod R/W), critical for multi-node clusters.
- StorageClass — Kubernetes API object that defines templates for creating PersistentVolumes, specifying PV properties (size, type) and the storage provisioner plugin (like Ceph Rook) to use
- StorageClass and dynamic provisioning — StorageClass defines PV provisioners for automatic volume creation; PVCs can trigger dynamic provisioning when StorageClass is specified or use default class, with empty string disabling dynamic provisioning.
- Storm Cluster Configuration — Configuration management through storm.yaml file specifying ZooKeeper servers, Nimbus seeds, local directory, and supervisor slot ports for cluster deployment
- Storm Data Pipeline Integration — Integration patterns with external systems using Flume for data acquisition, Kafka for temporary buffering, and Redis for in-memory data storage within stream processing workflows
- Storm parallelism and execution model — Storm's distributed execution hierarchy comprising Worker processes (JVMs), Executors (threads), and Tasks, which can be configured via topology config to control concurrency levels.
- Storm programming model — The Storm application development framework using IRichSpout interfaces for data ingestion and IRichBolt interfaces for data processing, connected via Stream Grouping to form distributed topologies.
- Storm Stream Grouping — Data flow routing mechanism that defines how tuples stream from spouts to bolts or between bolts, determining distribution strategies for parallel processing
- Storm Topology Deployment — Deployment methods for submitting Storm topologies in local mode (LocalCluster) for development or remote/cluster mode for production using storm jar command
- Strategy Pattern for Document Generation — An object-oriented design pattern where different document format implementations (CSV, PDF) implement a common interface, allowing runtime selection of appropriate document generation strategies based on report type.
- Strategy Pattern for Report Document Types — Design approach using polymorphic service implementations with an accept() method to support multiple document output formats (CSV, PDF) that can be extended without modifying core logic.
- Stream Collectors — Utility class providing static methods for common reduction operations and mutable reduction patterns in the Stream API, enabling aggregation of stream elements.
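The Stream Collectors entry above can be sketched with a mutable reduction: grouping strings by length into a Map via Collectors.groupingBy.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class CollectorsDemo {
    public static void main(String[] args) {
        // groupingBy classifies each element by a key (here, its length)
        // and accumulates matching elements into lists.
        Map<Integer, List<String>> byLength = Stream.of("a", "bb", "cc", "ddd")
                .collect(Collectors.groupingBy(String::length));
        System.out.println(byLength);
    }
}
```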
- Stream intermediate operations — Operations that return a new stream object (such as peek() and sequential()), allowing operation chaining and lazy evaluation until a terminal operation is invoked.
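Lazy evaluation of intermediate operations, as described above, can be observed with peek(): its callback fires only once a terminal operation pulls elements through the pipeline.

```java
import java.util.stream.Stream;

public class LazyStreamDemo {
    public static void main(String[] args) {
        Stream<Integer> pipeline = Stream.of(1, 2, 3)
                .peek(n -> System.out.println("peek saw " + n));
        // No elements have flowed yet: peek() is intermediate and lazy.
        System.out.println("pipeline built, nothing traversed yet");
        long big = pipeline.filter(n -> n > 1).count();  // terminal op triggers traversal
        System.out.println("count = " + big);
    }
}
```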
- Stream resource management with AutoCloseable — Streams implement AutoCloseable to support try-with-resources pattern and onClose handlers for cleanup operations, including exception handling behavior.
- Striped64 — A concurrent utility class added in Java 8 that provides lock-free, thread-safe operations for high-performance counters and accumulators under concurrent access.
- Strong encapsulation — Module system capability that controls which packages and types are accessible to other modules, preventing unauthorized access to internal APIs
- Sub-Agent Tool Isolation — Security boundary preventing sub-agents from accessing dangerous tools (delegate_task for leaf agents, clarify, memory, send_message, execute_code)—orchestrators regain delegate_task while remaining blocked from user interaction and cross-platform side effects.
- SubEtha SMTP — Java library for implementing SMTP mail receiving servers, providing programmable SMTP server functionality for Java applications to process incoming email messages
- Subject Alternative Name (SAN) configuration — X.509 extension allowing certificates to specify additional valid domain names beyond the Common Name, configured via subjectAltName parameter in OpenSSL configuration files.
- Supplier interface — A Java functional interface representing a supplier of results with no input arguments (a nullary function), used for lazy evaluation or providing values on demand.
- Surgical Changes principle — Only modify code directly related to the assigned task without refactoring adjacent functions, rewriting unrelated comments, or cleaning up code outside the immediate scope.
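  A minimal sketch of Supplier-based lazy evaluation (method and value names are illustrative): the supplier's body never runs unless the value is actually needed.

  ```java
  import java.util.function.Supplier;

  public class SupplierDemo {
      // Returns the cached value if present; only otherwise invokes the supplier.
      public static String getOrCompute(String cached, Supplier<String> expensive) {
          return cached != null ? cached : expensive.get();
      }

      public static void main(String[] args) {
          // The throwing supplier is never invoked when a cached value exists.
          System.out.println(getOrCompute("hit", () -> {
              throw new IllegalStateException("not lazy!");
          }));
          System.out.println(getOrCompute(null, () -> "computed"));
      }
  }
  ```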
- SVID verification workflow — Process of validating workload identity by extracting and inspecting SPIFFE Verifiable Identity Documents (SVIDs) using istioctl proxy-config secret and openssl commands to confirm certificate issuance authority.
- Swagger2 specification — A popular open-source framework for describing and documenting RESTful APIs, providing a standard interface for API metadata and interactive documentation.
- Switch-based command routing pattern — A control flow pattern using switch statements on os.Args[1] to route execution to different handler functions based on the subcommand provided, enabling clean CLI command dispatch.
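  A minimal Go sketch of this dispatch pattern (subcommand names and return values are illustrative): a switch on os.Args[1] routes to a handler, with a usage fallback when no subcommand is given.

  ```go
  package main

  import (
  	"fmt"
  	"os"
  )

  // route dispatches on the first CLI argument; "serve" and "migrate"
  // are placeholder subcommands.
  func route(args []string) string {
  	if len(args) < 2 {
  		return "usage"
  	}
  	switch args[1] {
  	case "serve":
  		return "serving"
  	case "migrate":
  		return "migrating"
  	default:
  		return "unknown: " + args[1]
  	}
  }

  func main() {
  	fmt.Println(route(os.Args))
  }
  ```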
- Symmetric encryption algorithms — Shared-key ciphers for data encryption including AES, ChaCha20-Poly1305, Blowfish, CAST-128, Triple DES (TDES), IDEA, and regional standards like GOST 28147-89 and SM4.
- Synchronized vs ReentrantLock — Comparison between Java's built-in synchronized keyword and the ReentrantLock class for thread synchronization and mutual exclusion.
- Synchronous and Asynchronous I/O — Communication mechanism distinction between synchronous I/O (caller actively waits for results via blocking or polling) and asynchronous I/O (OS handles I/O and notifies via callbacks or events).
- Tab-Copy — A Chrome extension utility for managing and copying browser tabs, part of a developer tools collection for enhanced browser workflow efficiency.
- TabCopy — A Chrome browser extension that enables bulk copying of tab information in customizable formats including Markdown, supporting both Simple Mode and Fancy Mode with flexible output templates.
- TabCopy configuration syntax — Template syntax for customizing TabCopy output format using placeholders like [title] and [url] with Markdown link formatting
- TabCopy extension — Chrome browser extension for quickly copying URLs, integrated into the Obsidian workflow for efficient link management.
- TabCopy Fancy Mode — An advanced operation mode in TabCopy that supports up to three customizable format configurations, including Custom templates, for more sophisticated tab information extraction workflows.
- TabCopy modes — TabCopy offers two operational modes: Simple Mode for basic functionality and Fancy Mode which supports up to three custom format configurations with advanced customization options.
- TabCopy placeholder syntax — TabCopy uses bracket-based placeholders like [title], [url], and [link] as template variables that dynamically extract and format webpage metadata when copying tab information.
- TabCopy Simple Mode — A basic operation mode in TabCopy that provides straightforward template-based copying of tab data using placeholder variables like [title], [url], and [link] to extract webpage information.
- Tag-based Organization — A categorization method using metadata tags to increase note discoverability and contextual connections, often combined with backlinks and relationship graphs for enhanced navigation.
- TCP Echo Server — A simple network service that receives TCP connections and echoes back received data with a configurable prefix, commonly used for testing network connectivity and service mesh functionality.
- TCP echo service testing with netcat (nc) — A testing methodology using netcat within a container (busybox) to send data over TCP and verify the server's echo response, including any prepended prefixes.
- TDD 驱动增量实现 — Enforcing test-first development in AI agent workflows: tests act as proof of the specification rather than an afterthought, with implementation proceeding in small incremental slices under continuous verification, avoiding the careless "it runs, so it's done" mindset.
- Tech creator ecosystem — Network of technical content creators across YouTube, GitLab, GitHub, and personal blogs providing DevOps and development tutorials, forming a personalized learning curriculum.
- Tech Creator Resource Curation — Systematically following and organizing content from technical educators and content creators across multiple platforms (YouTube, GitHub, personal blogs) as part of a personalized learning ecosystem.
- Technical creator curation — The practice of systematically following and organizing content from technical educators, YouTubers, and bloggers as part of a personalized learning ecosystem and resource collection.
- Technology Maturity Assessment — A prerequisite evaluation step in the learning process that determines whether a technology has reached sufficient stability and community support before investing learning time.
- Telnet SMTP verification — Manual testing technique using telnet to connect to SMTP servers (typically port 25) and interactively execute SMTP commands to verify server functionality and configuration
- Template metadata management — The practice of tracking structured metadata fields such as created_date, updated_date, aliases, and tags within template files to enable organization and traceability
- Template-driven content selection — The practice of using predefined template variables to selectively extract and format specific elements from web pages during content capture operations.
- Templater (Obsidian Plugin) — An Obsidian plugin for advanced template functionality, extending beyond basic templating to enable dynamic note creation and automation through template scripts.
- Templater plugin — An Obsidian plugin for advanced template functionality, enabling dynamic note creation and automation through template scripts
- Templater plugin integration — The capability of certain Obsidian plugins to integrate with the Templater plugin, enabling dynamic content generation and template processing within plugin functionality.
- Temporal documentation tagging — The practice of adding time-based tags (e.g., YYYY-MM format) to documentation to track when content was created or updated, aiding in chronological organization and maintenance.
- Temporary node pool pattern — Infrastructure practice of creating intermediate node pools to facilitate workload transitions during cluster upgrades or maintenance operations.
- Tensor data structure — Multi-dimensional array serving as the fundamental data structure in TensorFlow for representing and manipulating data throughout computational graphs.
- Tensor(張量)數據結構 — Tensors are multi-dimensional arrays that serve as the fundamental data structure in TensorFlow for representing and manipulating data throughout computational graphs.
- TensorFlow — Open-source machine learning framework that uses tensors (multi-dimensional arrays) as data structures and flow-based computational models for neural network development, training, and prediction.
- TeraBox — A cloud storage service offering 1 TB (1024 GB) of free storage with Google account authentication and large file transfer capabilities.
- Term vs match query — Two fundamental Elasticsearch query types: term queries treat the input as a single token for exact matching (e.g., "iphone 手機" as one term), while match queries analyze/tokenize the input for multi-term search.
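  Two illustrative query bodies contrasting the behaviors described above; the `name` and `name.keyword` field names are assumptions for a hypothetical products index.

  ```json
  {"query": {"term": {"name.keyword": "iphone 手機"}}}
  ```

  ```json
  {"query": {"match": {"name": "iphone 手機"}}}
  ```

  The term query matches only documents whose keyword field equals the whole string exactly, while the match query analyzes the input into tokens (e.g., "iphone" and "手機") and searches on each.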
- Terminal operations (TerminalOp) — Operations that trigger the actual computation of a stream pipeline and produce a result or side-effect, such as forEach, collect, or reduce, as opposed to intermediate operations that return new streams.
- Terraform — An infrastructure as code tool for automating and provisioning infrastructure resources using declarative configuration files
- Terraform configuration files — The fundamental unit of Terraform configuration stored in *.tf files that define infrastructure resources, providers, variables, and modules using HCL syntax.
- Terraform configuration structure — Terraform configurations are authored in files with the .tf extension using HCL syntax, defining infrastructure resources and their relationships.
- Terraform Docker provider — Terraform provider that enables infrastructure definition for Docker containers, images, and related resources
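  A minimal sketch of a .tf file tying the two entries above together: HCL syntax, a provider requirement, and one resource. The provider source follows the published kreuzwerker/docker provider; the container name and image are placeholders.

  ```hcl
  terraform {
    required_providers {
      docker = {
        source = "kreuzwerker/docker"
      }
    }
  }

  # One declaratively managed Docker container.
  resource "docker_container" "web" {
    name  = "web"
    image = "nginx:alpine"
  }
  ```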
- Terraform GKE version management — Version control practices for Google Kubernetes Engine using Terraform variables and state refresh operations to ensure infrastructure consistency.
- Terraform Helm provider — A Terraform plugin that enables Infrastructure as Code management of Helm chart releases through declarative configuration and the helm_release resource type.
- terraform-configuration-files-hcl — Terraform configuration files written in HashiCorp Configuration Language (HCL) with .tf extension that define infrastructure resources, providers, and variables
- terraform-workflow-automation — Standardized Terraform operational workflow including init (initialize), plan (preview changes), apply (execute changes), and destroy (cleanup) for managing infrastructure lifecycle
- test-templater — A minimal test document or template used to validate the Templater plugin functionality in Obsidian
- Tfswitch Terraform Version Management — Tool for switching between different Terraform versions on the same machine, useful for testing across multiple Terraform versions or maintaining compatibility with different projects
- Think Before Coding principle — Agents should expose ambiguities and ask clarifying questions rather than guessing intent, showing trade-offs before implementation rather than proceeding blindly.
- Thread blocking state — A thread state where a thread is blocked waiting for a monitor lock to enter a synchronized block or method
- Thread waiting state — A thread state where a thread is paused and waiting for another thread to perform a specific action or notification
- Thread-safe handler management — ChannelPipeline is thread-safe, allowing ChannelHandlers to be added or removed at runtime without requiring synchronization, with each pipeline maintaining handler references.
- Three-Layer Chunking Strategy — Multi-modal document chunking approaches: Recursive (5-level delimiter hierarchy, 300 words + 50 overlap for timelines), Semantic (cosine similarity of adjacent sentences to detect topic boundaries for compiled truth), and LLM-guided (Claude Haiku sliding window for high-value content).
- Thrift Server Models — Different server implementation patterns in Apache Thrift: simple single-threaded, thread pool, nonblocking, and half-sync/half-async (THsHa) for handling concurrent RPC requests.
- Thrift Transport Layers — Underlying data transmission mechanisms in Apache Thrift including socket-based (TSocket), framed (TFramedTransport), file-based (TFileTransport), and in-memory (TMemoryTransport) options.
- Thymeleaf attribute processor — Thymeleaf's th:id and other attribute processors that can replace standard HTML attributes to enable dynamic template rendering.
- Thymeleaf expression syntax — Core Thymeleaf template expression types: ${} for variable expressions (OGNL), *{} for selection expressions, #{} for internationalization, @{} for URL expressions, and ~{} for fragment expressions.
- Thymeleaf version 3 Spring configuration — Maven property configuration for upgrading Spring Boot projects to Thymeleaf version 3.0.11.RELEASE with matching layout-dialect version 2.3.0.
- Tiered Entity Enrichment — Three-tier approach (Tier 1/2/3) for creating and updating person/company pages with progressively detailed information, extracting entities from content and cross-linking related pages automatically.
- Tiered infrastructure requirements — Hardware resource requirements mapped to project phases, with 4-core/8GB supporting dashboard setup, 4-core/16GB for Jenkins, and 8-core/24GB for Prometheus.
- TLS certificate chain management — Techniques for combining certificate files (certificate.crt, ca_bundle.crt) and converting to PKCS12 format for web server configuration
- TLS Protocol — The cryptographic protocol that provides secure communication over networks, working in conjunction with digital certificates to establish encrypted connections and verify server identities.
- TNS connection descriptor format — The syntax and structure for defining Oracle database connection parameters, including protocol, host, port, and service name specifications within a DESCRIPTION block.
- toast-notification-animation-settings — Toastr configuration for notification visual effects including showEasing, hideEasing easing curves and showMethod, hideMethod animation types
- toast-notification-timeout-controls — Configuration options in toastr controlling notification display duration, including timeOut, showDuration, hideDuration, and extendedTimeOut parameters
- toast-notification-types — Predefined notification severity levels in toastr including success, warning, and error methods for different message types
- toast-positioning-configuration — The toastr option 'positionClass' that controls where notification messages appear on screen (e.g., toast-bottom-left)
- Toastr JavaScript Notification Library — A jQuery-based JavaScript plugin for creating non-blocking toast notifications with customizable positioning, animation, and timeout options
- Token cost tracking by model — Monitoring capability that attributes API usage and expenses to specific AI models, enabling cost analysis and optimization decisions for multi-model agent systems.
- Token predictability scoring — Method for ranking tokens by their reconstructability using language model probability distributions, identifying which tokens can be safely removed without information loss based on their predictability in context.
- token压缩优化 — Through knowledge-graph persistence and querying, the AI assistant reads only the compressed graph rather than the original files; measured on the Karpathy repos corpus, this achieved 71.5× token compression, significantly reducing query cost.
- Tomcat base image with JMX exporter — Dockerfile construction combining JRE 8u112, Apache Tomcat 8.5.51, and JMX Prometheus JavaAgent for JVM metrics exposure on port 12346, with timezone and locale configuration for Chinese environments.
- Tomcat Docker image — Official Docker Hub image for Apache Tomcat servlet container, including pull commands and port mapping configuration (default 8080).
- Tomcat SSL/TLS configuration — Configuration of HTTPS connectors in Tomcat server.xml using PKCS12 keystores with TLS protocol settings
- Tomcat systemd service reconfiguration — Procedure for modifying systemd service units for Tomcat during infrastructure migration, including backing up service files, updating IP addresses in service definitions, and reloading systemd daemon
- Tool call parser configuration — The critical vLLM deployment option tool-call-parser must match the model version (qwen3 for Qwen 3.6); combined with enable-auto-tool-choice, it enables genuine tool invocation rather than merely descriptive calls.
- Tool checklist template — A structured document format for tracking and managing software tools across different platforms, using links, categorization, and metadata to organize tool inventories.
- Tool documentation template — A standardized organizational structure for documenting software tools and platforms, including sections for origin/purpose, description, download/access information, and usage instructions.
- Tools categorization system — A hierarchical classification approach using numbered prefixes and category labels to organize documentation about software tools, utilities, and applications within a knowledge base.
- Tor Browser Containerization — Running Tor Browser in a Docker container with X11 forwarding for privacy-focused browsing within an isolated environment, maintaining separation from host system.
- Tornado web framework — A Python web framework and asynchronous networking library that supports WebSockets, used in this demo application to demonstrate real-time bidirectional communication through an Istio service mesh.
- TP-Link C2100 Router Setup — Basic network configuration and connection process for the TP-Link C2100 home router, including accessing the default password located on the device's rear panel.
- Tracing — The practice of tracking and analyzing the complete path of requests as they flow through distributed systems, also known as call-chain analysis.
- Traditional deployment era limitations — Early application deployment on physical servers without resource constraints caused allocation problems, leading to underutilized resources when applications were isolated on separate machines.
- Traefik — A modern cloud-native reverse proxy and load balancer that integrates with Docker through container labels for automatic service discovery and routing configuration.
- Traefik frontend rules — Routing logic definitions that determine how incoming requests are matched and directed to backend services, typically using domain-based Host matching conditions.
- Traffic Splitting Strategies — Methods for distributing network traffic between multiple service versions, with priority order: header-based routing (highest), cookie-based routing (middle), and weight-based routing (lowest).
- Transport Layer Abstraction — Decoupled model provider interface that treats Anthropic, Chat Completions, Responses API, and AWS Bedrock as interchangeable transports—reducing provider integration cost and enabling unified retry/rate-limit/log governance.
- Travis CI — A continuous integration (CI) platform for testing and deploying code with confidence, used in DevOps workflows.
- triple verification framework — Validation criteria for mental model extraction requiring cross-domain recurrence (appears in 2+ contexts), predictive power (can infer stance on new problems), and exclusivity (not generic smart-person thinking).
- TUI and Web UI companion pattern — Design approach where terminal user interfaces and browser-based dashboards independently read from the same data source, allowing simultaneous operation with complementary feature sets.
- TUI/Web UI component libraries — A general-purpose terminal UI library (pi-tui, with diff-based rendering) and web chat components (pi-web-ui) that can be used independently for AI agent interface development; the separation of concerns makes the UI layer reusable.
- Turbo Quant asymmetric KV cache compression — Memory optimization technique that reduces precision of secondary context data while maintaining high precision for critical information, enabling large context windows within limited VRAM
- Two-tier proxy architecture for Kubernetes — Infrastructure pattern combining external NGINX reverse proxy (hdss7-12) forwarding to Kubernetes Ingress NodePort, enabling domain-based routing to internal services.
- type-parameter-independence — The ability of generic methods to declare type parameters separately from class-level generics, allowing methods to accept and return different types than the enclosing class.
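  A minimal illustration of this independence (class and method names are hypothetical): the method-level type parameter `<U>` is declared separately from the class-level `T` and need not relate to it.

  ```java
  import java.util.List;

  public class Box<T> {
      // <U> belongs to the method, not the class: callers may use any
      // element type regardless of how Box's T is instantiated.
      public static <U> U first(List<U> items) {
          return items.get(0);
      }

      public static void main(String[] args) {
          System.out.println(first(List.of("a", "b"))); // U inferred as String
          System.out.println(first(List.of(1, 2)));     // U inferred as Integer
      }
  }
  ```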
- Unchecked functional interface for checked exceptions — A custom functional interface (BiFunctionUnchecked) that bridges Java's functional interfaces with checked exceptions like SQLException, enabling exception-aware lambda expressions
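  A sketch of the bridging idea, assuming a custom interface like the BiFunctionUnchecked named above (the helper method and names here are illustrative): the interface's abstract method declares the checked exception, and a wrapper rethrows it unchecked.

  ```java
  import java.sql.SQLException;

  public class UncheckedDemo {
      // Mirrors BiFunction but permits a checked SQLException in the lambda body.
      @FunctionalInterface
      interface BiFunctionUnchecked<A, B, R> {
          R apply(A a, B b) throws SQLException;
      }

      // Adapts the exception-aware lambda to ordinary calling code.
      static <A, B, R> R call(BiFunctionUnchecked<A, B, R> fn, A a, B b) {
          try {
              return fn.apply(a, b);
          } catch (SQLException e) {
              throw new RuntimeException(e); // rewrap as unchecked
          }
      }

      public static void main(String[] args) {
          // The lambda could throw SQLException without a try/catch at this site.
          System.out.println(call((x, y) -> x + ":" + y, "users", 42));
      }
  }
  ```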
- Unicode emoji support — Capability to include emoji characters (😜) directly in Markdown content for visual emphasis or annotation
- Union filesystem and layered images — Storage technology (like AuFS) that combines multiple read-only layers with a writable layer into a single unified mount point, enabling efficient image distribution and modification
- URL encoding and web scraping — Techniques for encoding data for safe transmission in HTTP requests and extracting information from web pages, using tools like curl, sed, and jq to parse HTML responses and API data.
- USE and RED monitoring principles — Industry-standard frameworks for planning monitoring metrics: USE focuses on resource monitoring (Utilization, Saturation, Errors), while RED focuses on service monitoring (Rate, Errors, Duration).
- Value semantics in Go range loops — Go's range loops provide copies of slice elements rather than references, requiring the use of index-based access when modifying original slice elements.
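  The copy semantics above can be sketched as follows (function names are illustrative): writing to the range variable changes only the copy, while index-based access mutates the slice.

  ```go
  package main

  import "fmt"

  // Range yields copies: assigning to v leaves xs unchanged.
  func doubleBroken(xs []int) {
  	for _, v := range xs {
  		v *= 2 // modifies only the local copy
  	}
  }

  // Index-based access writes through to the underlying array.
  func doubleFixed(xs []int) {
  	for i := range xs {
  		xs[i] *= 2
  	}
  }

  func main() {
  	a := []int{1, 2, 3}
  	doubleBroken(a)
  	fmt.Println(a) // still [1 2 3]
  	doubleFixed(a)
  	fmt.Println(a) // now [2 4 6]
  }
  ```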
- VcXsrv Windows X Server Setup — X Window System server installation on Windows to enable Linux GUI applications to display their graphical interfaces, used as companion for WSL and Docker GUI workflows.
- Verdent 适配五层级模型 — Mapping Agent Skills onto the configuration levels of the Verdent tool: Level 1 verdent.md (global behavior principles), Level 2 agents.md (project-level rules), Level 3 Plan Rules (reinforcement during the planning phase), Level 4 Custom Sub-agents (expert personas), and Level 5 Parallel Workspaces.
- Version tagging strategy — Independent versioning scheme for Bookinfo samples separate from Istio versions, allowing samples to work with any Istio version while maintaining backward compatibility.
- Vertical Pod Autoscaler (VPA) — Autoscaler that automatically recommends and applies optimal CPU and memory resource requests/limits by analyzing historical metrics data, requiring pod deletion and recreation to apply updated configurations via its Updater component.
- ViewResolver — Strategy interface in Spring MVC that resolves logical view names to actual View implementations, with InternalResourceViewResolver being a common implementation for JSP-based views and templating.
- virtual machine (VM) — Complete operating system environment running on physical servers with hardware virtualization, providing strong isolation but requiring full OS resources and startup time compared to containers.
- Virtualization deployment era — VM technology introduced safe isolation between applications on single physical servers, improving resource utilization and scalability, though each VM carries the overhead of a complete operating system.
- Vision add-on system for MLX Engine — Specialized plugins (Gemma3, Pixtral, Mistral3, LFM2) that extend ModelKit to handle vision-language models with specific architectures, mapped via model_type configuration.
- vLLM deployment for Qwen models — A complete scheme for deploying Qwen 3.6 27B with the vLLM framework, covering key parameter configuration (enable-auto-tool-choice, tool-call-parser qwen3, max-model-len 32768) and common deployment pitfalls.
- vmrest command-line tool — The command-line utility (vmrest.exe) used to configure and manage the VMware Workstation REST API service, including credential setup with the -C flag and service startup.
- vmrest credential configuration — The process of setting up authentication credentials for VMware Workstation REST API using the vmrest -C command, which requires username entry and password confirmation.
- VMware network NAT mode — VMware's networking configuration mode that allows virtual machines to communicate with each other and the host system through a virtual network, requiring proper IP configuration and network settings in both guest and host systems.
- VMware Workstation REST API — A RESTful web service interface for VMware Workstation Pro that enables programmatic control and management of virtual machines through HTTP requests, providing automation capabilities for VM operations.
- VMware Workstation REST API authentication setup — The process of configuring credentials for VMware Workstation's REST API service using the vmrest.exe command-line tool with the -C flag to create or update authentication credentials
- Volume Mount Development — The practice of mounting source code from the host machine into a development container using Docker volumes, allowing the container to access and modify the codebase while maintaining isolation of the development environment.
- Volume size limiting — A Kubernetes storage management technique using the sizeLimit parameter to enforce storage quotas on volumes, particularly important for memory-backed emptyDir volumes to prevent excessive RAM consumption.
- volume-mounting-for-development-environment-sync — Using Docker -v ${PWD}:/work to mount the current working directory into the container, enabling real-time code sync between host and container for iterative development without rebuilding.
- VPA Admission Controller webhook — VPA component that intercepts Pod creation via Webhook to apply updated requests/limits before Pods are created by Deployment
- VPA component architecture — Vertical Pod Autoscaler consists of three components that work together: Recommender monitors historical metrics and calculates optimal resource requests/limits, Updater evicts pods requiring updates when in Auto mode, and Admission Controller webhook injects updated resource values before new pods are created.
- VPA installation on Kubernetes — VPA requires installing Custom Resource Definitions and three control plane components (vpa-recommender, vpa-updater, vpa-admission-controller) via the autoscaler repository's vpa-up.sh script, with TLS certificate generation for webhook admission control.
- VPA Recommender — VPA component that monitors resource utilization history and calculates recommended requests/limits values for containers
- VPA resource policy configuration — VPA resource policies specify container scope (containerName with wildcard support), adjustment bounds (minAllowed/maxAllowed), and which resource types to control (cpu, memory) through controlledResources field.
- VPA resource policy containerPolicies — VPA configuration specifying which containers to monitor, resource type constraints (cpu/memory), and minAllowed/maxAllowed bounds for resource adjustments
- VPA resource recommendation bounds — VPA provides four types of resource recommendations: Lower Bound (triggers pod replacement if requests fall below), Upper Bound (triggers replacement if requests exceed), Target (recommended optimal value within min/max constraints), and Uncapped Target (unconstrained recommendation ignoring minAllowed/maxAllowed limits).
- VPA update modes — VPA offers four operational modes controlling automatic resource adjustment behavior: Off (recommendations only), Initial (apply once at pod creation), Auto (continuous automatic updates), and Recreate (force pod recreation on updates).
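  An illustrative VerticalPodAutoscaler manifest combining the update modes and resource policies described in the entries above; "my-app" and the bound values are placeholders.

  ```yaml
  apiVersion: autoscaling.k8s.io/v1
  kind: VerticalPodAutoscaler
  metadata:
    name: my-app-vpa
  spec:
    targetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: my-app
    updatePolicy:
      updateMode: "Auto"        # Off | Initial | Auto | Recreate
    resourcePolicy:
      containerPolicies:
        - containerName: "*"    # wildcard: applies to all containers
          controlledResources: ["cpu", "memory"]
          minAllowed:
            cpu: 100m
            memory: 128Mi
          maxAllowed:
            cpu: "2"
            memory: 2Gi
  ```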
- VPA Updater — VPA component that evicts Pods requiring resource request/limit updates, triggered when Recommender provides new recommendations in Auto mode
- VSCode Dev Container — Visual Studio Code's Remote Development feature that enables development environments to run inside Docker containers with code mounted via volumes, providing consistent and isolated development setups.
- VSCode Dev Container integration with Kubernetes tools — Development workflow pattern using Visual Studio Code's dev container feature to provision containerized development environments pre-configured with Kubernetes tools like kind and kubectl.
- VSCode mouse wheel font zoom — A VSCode editor feature that enables changing font size using Ctrl key combined with mouse wheel scroll, providing quick zoom adjustment without accessing menus.
- VSCode settings.json configuration — The JSON-based configuration file where VSCode user and workspace settings are stored and modified
- Vue.js and jQuery Coexistence — Approaches and patterns for safely using jQuery plugins alongside Vue.js applications, allowing integration of legacy jQuery functionality within modern Vue frameworks.
- Vue.js Single-File JavaScript Components — A simplified component format for Vue.js that uses plain JavaScript files instead of .vue files, defining component templates, data, and logic within a JavaScript module for direct browser usage.
- VuePress Blog Theme Reco — A VuePress blog theme with comprehensive configuration options including navigation, search, blog settings, and friend links for building personal blogs.
- VuePress GitHub Actions Deployment — Automated deployment workflow for VuePress sites using GitHub Actions to build and deploy on push to main branch
- W3C Resource Timing API — W3C standard API for measuring detailed timing of individual network resources loaded by a page, providing granular performance data for each resource beyond the overall navigation timing.
- Waiting to blocking state transition — The mechanism and conditions under which a thread transitions from waiting state to blocked state in Java's thread lifecycle
- web-to-notes-integration — The practice of seamlessly moving discovered content from web sources into structured note-taking systems, preserving formatting and enabling efficient knowledge capture during research.
- Webhook testing — The development practice of validating callback endpoints from services like GitHub, Slack, and Telegram during local development using tunneling tools.
- WebJars integration — Method for serving client-side web libraries (JAR-packaged CSS/JS) in Spring Boot through the /webjars/** endpoint mapped to classpath:/META-INF/resources/webjars/
- WebLogic 12C installation — Installation procedures and setup for Oracle WebLogic Server 12C version, including version 12.1.3 updates and configuration
- WebLogic Diagnostics Framework (WLDF) — Oracle's diagnostic and monitoring framework for WebLogic Server, providing configuration capabilities for tracking and analyzing application server behavior
- WebLogic Docker Deployment — Containerized setup practice for WebLogic application server using Docker Compose with exposed ports (7001, 7002, 5556) and default credentials (weblogic/welcome1)
- WebLogic domain creation — Post-installation step in WebLogic Server setup where a basic domain is configured to host applications and resources, as referenced in Oracle documentation.
- WebLogic EJB testing and JNDI lookup — The process of testing Enterprise JavaBeans (EJB) deployed on Oracle WebLogic Server using JNDI (Java Naming and Directory Interface) to locate and invoke remote EJB services programmatically through InitialContext with WebLogic-specific properties.
- WebLogic generic JAR installer — The platform-independent Java archive file (fmw_12.2.1.4.0_wls.jar or fmw_12.2.1.4.0_wls_generic.jar) used to distribute and install WebLogic Server 12c across different operating systems
- WebLogic JDBC DataSource — A JNDI-bound resource object in WebLogic Server that provides database connection pooling capabilities to applications through connection borrowing from a managed connection pool.
- WebLogic JNDI InitialContext configuration — Configuration pattern for establishing JNDI connections to WebLogic Server using InitialContext with properties like WLInitialContextFactory, provider URL, and security credentials.
- WebLogic JNDI integration — Configuring Spring applications to connect to WebLogic server resources using JNDI with WLInitialContextFactory and t3 protocol URLs
- WebLogic Server 12c installation — Oracle WebLogic Server 12.2.1.4 installation process using the generic JAR installer (fmw_12.2.1.4.0_wls_generic.jar), executed via Java command line.
- WebLogic T3 protocol — WebLogic's proprietary RMI-based network protocol used for client-server communication, specified in JNDI PROVIDER_URL using the t3:// scheme (e.g., t3://localhost:8080) for connecting to WebLogic Server instances.
- WebLogic version 12.2.1.4 — Specific release version of Oracle WebLogic Server 12c (Fusion Middleware 12.2.1.4.0), representing the 12c family with specific patch level and feature set.
- WebSocket protocol upgrade — The HTTP mechanism that allows a client to request a protocol switch from standard HTTP to the WebSocket protocol for persistent, bidirectional communication, supported in Istio v1alpha3 routing rules since v0.8.0.
- WebSocket-based real-time monitoring — Architecture pattern where monitoring dashboards maintain persistent connections to data sources for automatic updates without manual refresh, essential for observing dynamic AI agent behavior.
- Wildcard domain certificates — Using *.domain.com syntax in Common Name field to create certificates valid for all subdomains of a base domain, enabling flexible SSL coverage across multiple hosts.
- Wildcard domain proxy routing — NGINX server configuration using wildcard server_name (*.od.com) to route multiple subdomains through a single proxy configuration to Kubernetes ingress endpoints.
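A minimal sketch of such a wildcard server block; both the ingress endpoint address and the domain mirror the entry's example and are illustrative:

```nginx
# Any subdomain of od.com is forwarded to the Kubernetes ingress.
server {
    listen       80;
    server_name  *.od.com;

    location / {
        proxy_pass http://10.4.7.10:81;        # ingress-controller endpoint (example)
        proxy_set_header Host      $http_host; # preserve the requested subdomain
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```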
- window.performance API — Browser interface providing performance-related metrics including memory usage, navigation timing, and resource timing data for web applications.
- window.performance.memory — Browser API that provides memory usage information for web applications, allowing monitoring of JavaScript heap size and memory consumption patterns as part of frontend performance analysis.
- window.performance.navigation — Browser API that provides navigation type information including redirect count, enabling analysis of page navigation patterns and the performance impact of redirects on page load times.
- Windows 11 drag-and-drop execution fix — A workaround for Windows 11 drag-and-drop file execution issues that involves disabling User Account Control through registry modification
- Windows CMD commands — Collection of command-line utilities and syntax for Windows Command Prompt, covering system administration and network diagnostics
- Windows development environment setup workflow — A recommended installation sequence for Windows development tools: WSL2 first, then Chocolatey package manager, then Docker Desktop, ensuring proper dependencies and compatibility.
- Windows Netcat — The Windows implementation of netcat, a network utility for reading from and writing to network connections using TCP or UDP, often used for port listening, debugging, and network communication testing.
- Windows Package Manager — Package management solution for Windows that enables automated installation, update, and management of software applications through command-line interface.
- Windows package manager ecosystem — The interconnected tooling landscape on Windows comprising Chocolatey as the base package manager, which enables installation of specialized version managers like SDKMAN and NVM for different development ecosystems.
- Windows port monitoring with netstat — Using netstat command with findstr to check which ports (80, 443) are in use on Windows systems for troubleshooting and service verification.
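For example, in CMD (the PID value is illustrative):

```bat
:: Check whether anything is listening on ports 80/443
netstat -ano | findstr :80
netstat -ano | findstr :443
:: The last column is the owning PID; resolve it to a process name with:
tasklist /FI "PID eq 1234"
```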
- Windows process memory metrics — Key memory performance indicators for Windows processes including PM (Private Memory), WS (Working Set), VM (Virtual Memory), and NPM (Non-Paged Memory), used to analyze process resource consumption.
- Windows Registry System — A hierarchical configuration database in Windows operating systems that stores low-level settings for the OS and applications, accessible via regedit.exe and organized in keys like HKEY_LOCAL_MACHINE
- Windows Run dialog (regedit) — The Windows+R shortcut that opens the Run dialog box, providing quick access to system tools like the Registry Editor (regedit) for administrative tasks
- Windows socket permission errors — Network error condition when attempting to bind to a TCP port that is either occupied or protected, manifesting as 'An attempt was made to access a socket in a way forbidden by its access permissions'
- Windows Subsystem for Linux (WSL) — Windows compatibility layer for running Linux environments natively on Windows, typically installed first when setting up a development machine to enable Linux toolchains.
- Windows Terminal same-directory navigation — Configuration technique for opening new tabs or panes in Windows Terminal that inherit the current working directory from the active session, using ANSI escape sequences in shell profiles.
- Windows tools catalog — Organized inventory of Windows-specific development tools, utilities, and command-line resources categorized for quick reference and discovery.
- Windows Zabbix agent service management — Windows-specific commands for managing the Zabbix Agent and Tomcat8 services with net stop/net start, applied during infrastructure migration
- WinNAT service port conflicts — Windows NAT (WinNAT) service can occupy ports required by Docker containers, causing socket permission errors during bind operations
- workflow-chain-pattern — A design pattern where multiple workflows execute sequentially, with each workflow triggering the next through dispatch events, creating a chain of dependent workflow executions.
- workflow-running-modes — A technique for parameterizing a single workflow to execute in different modes based on command arguments passed in the dispatch event body, allowing one workflow definition to handle multiple execution paths.
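Assuming GitHub Actions as the CI system (the two entries above do not name one), a downstream workflow in such a chain might look like this; the event type and `mode` payload field are hypothetical:

```yaml
# Triggered when an upstream workflow fires a repository_dispatch event.
on:
  repository_dispatch:
    types: [run-stage-b]

jobs:
  stage-b:
    runs-on: ubuntu-latest
    steps:
      - name: Run in the requested mode
        run: echo "mode=${{ github.event.client_payload.mode }}"
```

The upstream workflow would fire the event by POSTing `{"event_type":"run-stage-b","client_payload":{"mode":"full"}}` to the repository's `/dispatches` REST endpoint.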
- WorkflowEngineServiceRemote interface — A remote EJB service interface for workflow engine operations in a BPM (Business Process Management) system, providing methods to find and manipulate workflow tasks such as findTask() for retrieving workflow task objects by ID.
- workspace container image — Docker-based runtime environment for cloud development platforms that includes pre-configured tools and SDKs, extendable through custom Dockerfiles for specific language versions or toolchains.
- Workspace isolation — Multi-tenant architecture providing workspace-level separation where each workspace maintains independent agents, issues, and settings for team collaboration.
- wrangler (Cloudflare Workers CLI) — Command-line tool for creating, developing, and deploying Cloudflare Workers projects with TypeScript support and KV namespace binding
- WSL command-line interoperability — Ability to execute Linux tools and commands directly from Windows command line (PowerShell/CMD) using the 'wsl' prefix, enabling cross-platform command execution without switching environments.
- WSL custom distribution import — The process of creating a new WSL instance by importing a tar archive to a specified storage location using PowerShell commands
- WSL default user configuration — The process of setting and configuring the default user account for a WSL instance using the Ubuntu configuration utility, affecting which user account is used automatically on instance launch.
- WSL import and export management — Commands and procedures for backing up, distributing, and restoring WSL distributions using tar archives and the wsl --import, --export, and --unregister commands.
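A typical backup/restore round-trip, with illustrative distribution names and paths:

```powershell
# Back up, re-import under a new name, and later remove a distribution
wsl --export Ubuntu D:\backup\ubuntu.tar
wsl --import Ubuntu-dev D:\wsl\Ubuntu-dev D:\backup\ubuntu.tar --version 2
wsl --unregister Ubuntu-dev
```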
- WSL instance lifecycle management — Administrative operations for controlling WSL distributions including termination, export for backup, and removal via unregister commands
- WSL networking and interoperability — Bidirectional network access between Windows and WSL Linux environments, including localhost access from Windows to Linux applications and accessing Windows services from Linux via host IP resolution.
- WSL user and root configuration — Post-installation configuration tasks for WSL instances, including changing the root password, setting default users via wsl.conf, and managing user credentials through passwd commands.
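A minimal /etc/wsl.conf sketch (the username is an example); the setting takes effect after the instance is restarted, e.g. with `wsl --terminate <name>`:

```ini
[user]
default = devuser
```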
- WSL user configuration automation — Automated setup of default user and password management in WSL by modifying /etc/wsl.conf and using passwd commands in batch scripts
- WSL2 for development — Windows Subsystem for Linux 2, positioned as the foundational tool to install first when setting up a new Windows computer for development work.
- WSL2 manual installation — Step-by-step procedure for manually installing WSL2 on Windows, including enabling Windows Subsystem for Linux, enabling Virtual Machine Platform, installing the Linux kernel update package, and setting WSL 2 as the default version.
- WSL2 package management workflow — The standard procedure for maintaining and installing software in WSL2 environments using apt package manager, including updating repositories and installing graphical applications.
- WSL2 tarball installation — A method for installing WSL2 distributions using downloaded tarball files rather than Microsoft Store, enabling custom distribution instances and offline installation.
- WSL2 Ubuntu Instance Management — Techniques for installing and managing multiple Ubuntu instances within the Windows Subsystem for Linux 2 environment
- WSLg (WSL Graphics) — Windows Subsystem for Linux GUI support enabling graphical Linux applications to run on Windows desktop
- WSLg GUI application support — WSLg (WSL Graphics) capability for running Linux GUI applications directly on Windows desktop without additional configuration, enabling native Windows display of Linux graphical software.
- X-Frame-Options header — A security header that controls whether a website can be embedded in iframes or frames by other sites, protecting against clickjacking attacks with DENY, SAMEORIGIN, and ALLOW-FROM directives.
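A runnable sketch of the header in action, using the JDK's built-in HttpServer and HttpClient rather than any particular web framework:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FrameOptionsDemo {
    // Serve one page with X-Frame-Options set, fetch it, and return the
    // header a browser would see. SAMEORIGIN allows framing only by pages
    // from the same origin; DENY forbids framing entirely.
    static String fetchHeader() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            byte[] body = "hello".getBytes();
            exchange.getResponseHeaders().set("X-Frame-Options", "SAMEORIGIN");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        try {
            URI uri = URI.create("http://localhost:"
                    + server.getAddress().getPort() + "/");
            HttpResponse<String> resp = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(uri).build(),
                    HttpResponse.BodyHandlers.ofString());
            return resp.headers().firstValue("X-Frame-Options").orElse("missing");
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchHeader());   // prints: SAMEORIGIN
    }
}
```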
- X.509 certificate inspection with OpenSSL — Using openssl x509 -in certificate.crt -text -noout command to view detailed certificate information including issuer, validity period, public key, and extensions.
- X.509 Client Certificate Authentication — A Kubernetes authentication method where clients present certificates signed by the cluster's Certificate Authority (CA), with the certificate's Common Name (CN) field determining the username.
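A sketch of issuing such a client certificate with OpenSSL, using a throwaway CA so the commands are self-contained; in a real cluster you would sign with the cluster CA instead, and the names and validity periods here are illustrative:

```shell
# Throwaway CA standing in for the cluster CA
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -days 1 -subj "/CN=demo-ca" -out ca.crt
# Client key and CSR: CN becomes the Kubernetes username, O the group
openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -subj "/CN=jane/O=dev-team" -out jane.csr
# Sign the CSR with the CA
openssl x509 -req -in jane.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out jane.crt -days 1
# Inspect the resulting subject
openssl x509 -in jane.crt -noout -subject
```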
- X.509憑證格式 (X.509 Certificate Format) — X.509 is the standard format for public-key certificates, defining a certificate structure that includes version, serial number, signature algorithm, issuer, validity period, subject, subject public key information, and extension fields.
- X11 Forwarding with Docker on WSL — Configuration for running GUI applications from Docker containers on Windows Subsystem for Linux using X11 socket mounting and DISPLAY environment variable forwarding.
- xcall distributed command execution script — Shell script template for executing commands across multiple cluster nodes via SSH, enabling parallel command execution on host groups
- Xiaomi MiMo V2 Pro — Xiaomi's flagship AI model with 1 trillion+ parameters and 1 million token context window, optimized for agentic workflows including planning, tool use, error recovery, and multi-step decision-making.
- Xposed Payment Interception — A technical approach using the Xposed framework (or VirtualXposed) to intercept and modify payment application behavior on Android devices, enabling custom payment routing or automated callback mechanisms.
- xsai LLM interaction library — Custom TypeScript library similar to Vercel AI SDK for unified interaction with 30+ LLM providers
- xsync distributed file synchronization script — Shell script template for copying files to multiple cluster nodes using rsync in a loop, automating distribution across configured hosts
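A minimal xsync-style sketch; the host list and rsync flags are assumptions, and the transfer command can be overridden (e.g. `RSYNC=echo`) for a dry run:

```shell
#!/bin/bash
# Fan a path out to every configured cluster node with rsync over ssh.
HOSTS=${HOSTS:-"node1 node2 node3"}
RSYNC=${RSYNC:-"rsync -av"}

xsync() {
    local src=$1
    for host in $HOSTS; do
        # Copy to the same absolute path on each node
        $RSYNC "$src" "$host:$src"
    done
}
```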
- xxpay Open Source Payment Platform — An open-source aggregated payment solution (xxpay) that provides source code and framework for building integrated payment systems supporting multiple Chinese payment channels.
- YAML overlay pattern — A configuration management technique where multiple YAML files are layered (stacked) to customize base configurations without modifying the original files, enabling environment-specific variations.
- Yum Cache Generation — Using 'yum makecache' to pre-populate package metadata caches after repository configuration changes to improve subsequent package operations.
- Yum Source Backup Pattern — The practice of creating backup copies of repository configuration files before making changes, enabling rollback if issues occur.
- Zabbix agent configuration for Windows monitoring — Required Zabbix agent settings including EnableRemoteCommands, UnsafeUserParameters, Timeout, and UserParameter directives for Windows environments
- Zabbix agent configuration migration — Process of updating zabbix_agentd.conf files when moving infrastructure between environments, including backing up existing configurations and modifying Server directives to support multiple Zabbix server addresses
- Zabbix Alert Media Configuration — Notification delivery system architecture where media_type tables define execution scripts for different alert methods, media tables link users to notification methods, and alerts table preserves historical notification events.
- Zabbix data retention architecture — Two-tier data storage system where history tables store raw monitoring data for short periods (hours to days), while trends tables aggregate hourly statistics for long-term graphing and analysis.
- Zabbix database schema — Core database table structure and relationships in the Zabbix monitoring system, including hosts, items, triggers, and related configuration tables.
- Zabbix Host and Item Model — Core entity relationship where hosts represent monitored devices (agents, proxies) with associated metadata, and items are individual data collection points with unique identifiers, collection intervals, and status tracking.
- Zabbix multi-server configuration — Configuration pattern where Zabbix agents accept connections from multiple servers by using comma-separated Server values in zabbix_agentd.conf, enabling active-active or failover monitoring topologies
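A minimal zabbix_agentd.conf fragment for this topology (addresses and hostname are examples):

```ini
# Accept passive checks from either server; report active checks to the primary
Server=192.168.1.10,192.168.1.20
ServerActive=192.168.1.10
Hostname=web-01
```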
- Zabbix template export and import format — XML-based template structure for defining monitoring items, applications, graphs, and discovery rules in Zabbix monitoring system
- Zabbix Triggers and Actions — The event-driven mechanism in Zabbix where triggers (expressions built from functions such as max, last, and nodata) initiate predefined actions when they fire, with dependency relationships tracked between triggers.
- Zabbix UserParameter with PowerShell on Windows — Technique for creating custom monitoring parameters in Zabbix on Windows systems using PowerShell scripts as the data collection method
- Zero downtime deployment strategies — Deployment approaches that maintain service availability during updates by keeping at least one version operational throughout the transition process
- Zero-copy I/O optimization — A technique for transferring data from files to network sockets without copying data between kernel and user space buffers, significantly improving performance.
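A runnable Java sketch using `FileChannel.transferTo`, which on Linux can map to `sendfile` so the kernel moves the bytes without staging them in a user-space buffer; the example copies file to file, though the target could equally be a socket channel:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ZeroCopyDemo {
    // Copy src to dst via transferTo instead of read/write buffer loops.
    static void copy(Path src, Path dst) throws IOException {
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst,
                     StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            long pos = 0, size = in.size();
            while (pos < size) {   // transferTo may transfer fewer bytes than asked
                pos += in.transferTo(pos, size - pos, out);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("zc", ".txt");
        Files.writeString(src, "zero-copy demo");
        Path dst = Files.createTempFile("zc", ".out");
        copy(src, dst);
        System.out.println(Files.readString(dst));   // prints: zero-copy demo
    }
}
```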
- Zero-Downtime Deployment Validation Pattern — Verifying successful traffic switch in blue/green deployments by repeatedly calling application endpoints to confirm new version responses before decommissioning old resources.
- Zero-downtime deployment workflow with configuration centers — CI/CD pipeline pattern where code changes are built once into container images, then deployed sequentially to test environment for validation before production deployment, with configuration managed externally through Apollo rather than image rebuilds.
- Zettelkasten — A personal knowledge management methodology emphasizing atomic, interconnected notes with extensive cross-linking to facilitate knowledge development and unexpected connections between ideas.
- Zettelkasten 12 principles — Twelve foundational guidelines for effective Zettelkasten practice, including atomicity, independence, mandatory linking, annotation, personal paraphrasing, source tracking, and never deleting old notes.
- Zettelkasten Knowledge Base — A personal knowledge management system with 2,653 pages organized through Zettelkasten methodology, featuring atomic notes, extensive cross-linking, and hierarchical organization structures including MOCs and backlinks.
- Zettelkasten Knowledge Management System — A personal knowledge management methodology with 2,653 pages emphasizing atomic, interconnected notes with extensive cross-linking to facilitate knowledge development and unexpected connections between ideas.
- Zettelkasten methodology — A personal knowledge management system emphasizing atomic, interconnected notes with extensive cross-linking to facilitate knowledge development and unexpected connections between ideas
- Zettelkasten note types — Four categories of notes in the Zettelkasten system: fleeting notes (temporary inspiration), permanent notes (refined ideas), literature notes (reading notes), and project-related notes.
- Zettelkasten principles — Twelve foundational guidelines for effective Zettelkasten practice, including atomicity, independence, mandatory linking, annotation, personal paraphrasing, source tracking, and never deleting old notes.
- zkServer.sh management commands — Shell script utilities for ZooKeeper server lifecycle management including start, status, and stop operations, with jps for Java process verification.
- ZooKeeper cluster configuration — Multi-server ZooKeeper ensemble configuration including tickTime, initLimit, syncLimit, and the server peer ports (2888 for follower-to-leader communication, 3888 for leader election).
- ZooKeeper server ports — Three distinct port types in ZooKeeper: clientPort (2181) for client connections, 2888 for follower-to-leader communication, and 3888 for leader election.
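A zoo.cfg sketch for a three-node ensemble (hostnames and dataDir are illustrative); in the `server.N` entries the first port (2888) carries follower-to-leader traffic and the second (3888) handles leader election, and each node's dataDir needs a myid file containing its own N:

```properties
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888
```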
- ZooKeeper single-server setup — Basic installation and configuration procedure for running a standalone ZooKeeper server, including extraction, configuration file setup, and data directory initialization.
- Zookeeper集群部署 (ZooKeeper Cluster Deployment) — As the registry center for Dubbo services, ZooKeeper provides unified naming, state synchronization, cluster management, and distributed configuration management. High availability is achieved through a 3-node cluster deployment (1 leader + 2 followers), with a myid file identifying each node.
- Zsh scripting language — A Unix shell and scripting language used for writing shell scripts and command-line automation, featuring advanced features like parameter expansion and array manipulation.
- 尚矽谷-SpringBoot2核心技術 (Shangguigu Spring Boot 2 Core Technologies) — A comprehensive course series covering Spring Boot 2 core technologies and fundamental concepts, available as video tutorials.
- 尚硅谷 Java Design Patterns Course — A comprehensive video course series by 韩顺平 covering all 23 Gang of Four design patterns with graphical explanations and framework source code analysis, available on both Bilibili and YouTube platforms.
- 尚硅谷 Java设计模式课程 (Shangguigu Java Design Patterns Course) — A comprehensive video course series by 韩顺平 covering all 23 Gang of Four design patterns with graphical explanations and framework source code analysis, available on both Bilibili and YouTube platforms.
- 状态模式 (State Pattern) — A behavioral design pattern that allows an object to alter its behavior when its internal state changes, appearing as if the object changes its class—demonstrated through practical case studies in the referenced learning materials.
- 秒殺 (Flash Sale) — An e-commerce sales strategy where products are offered at significant discounts for very short durations (seconds or minutes), creating urgency through limited-time offers and scarcity tactics to drive rapid customer acquisition and high demand.
- 解释器模式 (Interpreter Pattern) — A behavioral design pattern that defines a grammatical representation for a language and provides an interpreter to process sentences in that language, part of the 23 classic patterns covered in comprehensive design pattern courses.
- 访问者模式 (Visitor Pattern) — A behavioral design pattern that separates algorithms from the object structure they operate on, allowing new operations to be added without modifying the existing object classes—mentioned in the context of assembly programming tutorials.
- 開發者工具與框架 (Developer Tools and Frameworks) —
- 限时抢购营销策略 (Time-Limited Sale Marketing Strategy) — A promotional approach that uses time constraints and discounted pricing to create psychological urgency in consumers, typically characterized by countdown timers, limited inventory, and aggressive discounting to trigger impulse purchases.
- 饥饿营销 (Hunger Marketing Strategy) — A marketing technique that deliberately restricts product supply or availability to increase perceived value and consumer demand, often used in conjunction with flash sales to amplify the urgency and exclusivity of offers.
- 黄金开局60分钟 (Golden First 60 Minutes) — The first 60 minutes after waking set the brain's default mode for the entire day. The active launch sequence: rise within 5 minutes, natural light and water, 3-5 minutes of light movement, no information feeds or social media, and write down the single MIT.
2207 pages | Generated 2026-04-28T16:30:22.218Z