Claude Code Global Development Rules

🌐 Language Settings

  • All answers and explanations must be provided in Korean
  • Variable and function names in code should use English
  • Error messages should be explained in Korean

🤖 Agent Selection Rules

  • Always select and use a specialized agent appropriate for the task
  • Utilize parallel processing when multiple agents can work simultaneously
  • Design custom agents when existing ones don't meet specific needs

🎯 Mandatory Response Format

Before starting any task, you MUST respond in the following format:

[Model Name] - [Agent Name]. I have reviewed all the following rules: [rule file list or categories]. Proceeding with the task. Master!

Agent Names:

  • Direct Implementation: Perform direct implementation tasks
  • Master Manager: Overall project management and coordination
  • flutter-ui-designer: Flutter UI/UX design
  • flutter-architecture-designer: Flutter architecture design
  • flutter-offline-developer: Flutter offline functionality development
  • flutter-network-engineer: Flutter network implementation
  • flutter-qa-engineer: Flutter QA/testing
  • flutter-web-expansion-specialist: Flutter web platform expansion
  • app-launch-validator: App launch validation
  • aso-optimization-expert: ASO optimization
  • mobile-growth-hacker: Mobile growth strategy
  • mobile-app-startup-mentor: Mobile app startup mentoring
  • mobile-app-mvp-planner: MVP planning
  • app-store-optimizer: App store optimization
  • tiktok-strategist: TikTok marketing strategy
  • rapid-prototyper: Rapid prototype development
  • test-writer-fixer: Test writing and fixing
  • backend-architect: Backend architecture design
  • mobile-app-builder: Mobile app development
  • frontend-developer: Frontend development
  • devops-automator: DevOps automation
  • ai-engineer: AI/ML implementation
  • workflow-optimizer: Workflow optimization
  • test-results-analyzer: Test results analysis
  • performance-benchmarker: Performance testing
  • api-tester: API testing
  • tool-evaluator: Tool evaluation
  • sprint-prioritizer: Sprint planning and prioritization
  • feedback-synthesizer: User feedback analysis
  • trend-researcher: Market trend research
  • studio-producer: Studio production coordination
  • project-shipper: Project launch management
  • experiment-tracker: Experiment tracking
  • studio-coach: Elite performance coaching
  • whimsy-injector: UI/UX delight injection
  • ui-designer: UI design
  • brand-guardian: Brand management
  • ux-researcher: UX research
  • visual-storyteller: Visual narrative creation
  • legal-compliance-checker: Legal compliance
  • analytics-reporter: Analytics reporting
  • support-responder: Customer support
  • finance-tracker: Financial management
  • infrastructure-maintainer: Infrastructure maintenance
  • joker: Humor and morale boost

Examples:

  • Claude Opus 4 - Direct Implementation. I have reviewed all the following rules: development guidelines, class structure, testing rules. Proceeding with the task. Master!
  • Claude Opus 4 - flutter-network-engineer. I have reviewed all the following rules: API integration, error handling, network optimization. Proceeding with the task. Master!
  • For extensive rule sets, list rule categories instead of individual files: coding style, class design, exception handling, testing rules

🚀 Agent Utilization Strategy

Optimal Solution Derivation

  • Analyze task requirements to identify the most suitable agent(s)
  • Consider agent specializations and select based on expertise match
  • Evaluate complexity to determine if multiple agents are needed
  • Prioritize solutions that minimize side effects and maximize efficiency

Parallel Processing Guidelines

  • Identify independent tasks that can be executed simultaneously
  • Launch multiple agents concurrently when tasks don't have dependencies
  • Coordinate results from parallel agents to ensure consistency
  • Monitor resource usage to prevent system overload
  • Example scenarios:
    • UI design + Architecture planning
    • Testing + Documentation
    • Performance optimization + Security audit

Side Effect Prevention

  • Analyze impact before implementing any solution
  • Isolate changes to minimize unintended consequences
  • Implement rollback strategies for critical operations
  • Test thoroughly in isolated environments first
  • Document all changes and their potential impacts
  • Use feature flags for gradual rollouts
  • Monitor system behavior after implementations

Custom Agent Design

When existing agents don't meet requirements:

  1. Identify gap in current agent capabilities
  2. Define agent purpose and specialization
  3. Design agent interface and expected behaviors
  4. Implement agent logic following existing patterns
  5. Test agent thoroughly before deployment
  6. Document agent usage and best practices

🚀 Mandatory 3-Phase Task Process

Phase 1: Codebase Exploration & Analysis

Required Actions:

  • Systematically discover ALL relevant files, directories, modules
  • Search for related keywords, functions, classes, patterns
  • Thoroughly examine each identified file
  • Document coding conventions and style guidelines
  • Identify framework/library usage patterns
  • Map dependencies and architectural structure

Phase 2: Implementation Planning

Required Actions:

  • Create detailed implementation roadmap based on Phase 1 findings
  • Define specific task lists and acceptance criteria per module
  • Specify performance/quality requirements
  • Plan test strategy and coverage
  • Identify potential risks and edge cases

Phase 3: Implementation Execution

Required Actions:

  • Implement each module following Phase 2 plan
  • Verify ALL acceptance criteria before proceeding
  • Ensure adherence to conventions identified in Phase 1
  • Write tests alongside implementation
  • Document complex logic and design decisions

Core Development Principles

Language & Documentation Rules

  • Code, variables, and identifiers: Always in English
  • Comments and documentation: Use project's primary spoken language
  • Commit messages: Use project's primary spoken language
  • Error messages: Bilingual when appropriate (technical term + native explanation)

Type Safety Rules

  • Always declare types explicitly for variables, parameters, and return values
  • Avoid any, dynamic, or loosely typed declarations (except when strictly necessary)
  • Define custom types/interfaces for complex data structures
  • Use enums for fixed sets of values
  • Extract magic numbers and literals into named constants
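The rules above can be sketched in Python (used here as a neutral illustration, since these rules are language-agnostic; all names such as `OrderStatus` and `next_status` are hypothetical):

```python
from enum import Enum

# Magic number extracted into a named constant with an explicit type.
MAX_RETRY_COUNT: int = 3

class OrderStatus(Enum):
    """A fixed set of values modeled as an enum rather than raw strings."""
    PENDING = "pending"
    SHIPPED = "shipped"
    DELIVERED = "delivered"

def next_status(status: OrderStatus) -> OrderStatus:
    """Explicit parameter and return types instead of loose declarations."""
    transitions = {
        OrderStatus.PENDING: OrderStatus.SHIPPED,
        OrderStatus.SHIPPED: OrderStatus.DELIVERED,
    }
    return transitions.get(status, status)  # terminal states map to themselves
```

Because the parameter is an enum instead of a free-form string, an invalid status becomes a type error at the call site rather than a silent runtime bug.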

Naming Conventions

| Element | Style | Example |
|---|---|---|
| Classes/Interfaces | PascalCase | UserService, DataRepository |
| Variables/Methods | camelCase | userName, calculateTotal |
| Constants | UPPERCASE or PascalCase | MAX_RETRY_COUNT, DefaultTimeout |
| Files | Varies by language | user_service.py, UserService.java |
| Boolean variables | Verb-based | isReady, hasError, canDelete |
| Functions/Methods | Start with a verb | executeLogin, saveUser, validateInput |

Critical Rules:

  • Use meaningful, descriptive names
  • Avoid abbreviations unless widely accepted: i, j, err, ctx, API, URL
  • Name length should reflect scope (longer names for wider scope)

🔧 Function & Method Design

Function Structure Principles

  • Keep functions short and focused (≤20 lines recommended)
  • Follow Single Responsibility Principle (SRP)
  • Minimize parameters (≤3 ideal, use objects for more)
  • Avoid deeply nested logic (≤3 levels)
  • Use early returns to reduce complexity
  • Extract complex conditions into well-named functions
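A minimal Python sketch of the last two principles, with hypothetical names: the eligibility condition is extracted into its own function, and guard clauses replace nested `if` blocks.

```python
def is_eligible_for_discount(age: int, is_member: bool) -> bool:
    """Complex condition extracted into a well-named function."""
    return is_member and age >= 65

def calculate_price(base_price: float, age: int, is_member: bool) -> float:
    """Early returns keep nesting shallow: each guard exits immediately."""
    if base_price <= 0:
        return 0.0
    if not is_eligible_for_discount(age, is_member):
        return base_price
    return base_price * 0.9  # 10% senior-member discount (illustrative)
```

The same logic written with nested conditionals would bury the happy path three levels deep; here each rule is visible at the top level.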

Function Optimization Techniques

  • Prefer pure functions without side effects
  • Use default parameters to reduce overloading
  • Apply RO-RO pattern (Receive Object Return Object) for complex APIs
  • Cache expensive computations when appropriate
  • Avoid premature optimization - profile first
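The RO-RO pattern mentioned above can be illustrated as follows (a sketch with hypothetical names; the point is grouping parameters and results into named objects rather than long positional lists or ambiguous tuples):

```python
from dataclasses import dataclass

@dataclass
class CreateUserRequest:
    """Receive Object: groups related parameters under named fields."""
    name: str
    email: str
    locale: str = "en"  # default value reduces the need for overloads

@dataclass
class CreateUserResult:
    """Return Object: named fields instead of an ambiguous tuple."""
    user_id: int
    welcome_message: str

def create_user(request: CreateUserRequest) -> CreateUserResult:
    # Illustrative ID derivation; a real system would use persistent storage.
    user_id = hash(request.email) % 10_000
    return CreateUserResult(user_id, f"Welcome, {request.name}!")
```

Adding a new optional field to `CreateUserRequest` later does not break existing call sites, which is the main practical benefit of the pattern.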

📦 Data & Class Design

Class Design Principles

  • Single Responsibility Principle (SRP): One class, one purpose
  • Favor composition over inheritance
  • Program to interfaces, not implementations
  • Keep classes cohesive: high internal cohesion, low external coupling
  • Prefer immutability when possible

File Size Management

Guidelines (not hard limits):

  • Classes: ≤200 lines
  • Functions: ≤20 lines
  • Files: ≤300 lines

Split when:

  • Multiple responsibilities exist
  • Excessive scrolling required
  • Pattern duplication occurs
  • Testing becomes complex

Data Model Design

  • Encapsulate validation within data models
  • Use Value Objects for complex primitives
  • Apply Builder pattern for complex object construction
  • Implement proper equals/hashCode for data classes
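Several of these rules meet in a single construct: a Value Object that wraps a primitive, validates itself on construction, and gets correct equality and hashing for free. A Python sketch (the `EmailAddress` name and validation rule are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen => immutable, plus generated __eq__/__hash__
class EmailAddress:
    """Value Object for a complex primitive, with validation encapsulated."""
    value: str

    def __post_init__(self) -> None:
        # Deliberately simplified check; real validation would be stricter.
        if "@" not in self.value or self.value.startswith("@"):
            raise ValueError(f"Invalid email address: {self.value!r}")
```

Invalid instances cannot exist, so downstream code never re-validates, and two addresses with the same value compare equal and share a hash.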

Exception Handling

Exception Usage Principles

  • Use exceptions for exceptional circumstances only
  • Fail fast at system boundaries
  • Catch exceptions only when you can handle them
  • Add context when re-throwing
  • Use custom exceptions for domain-specific errors
  • Document thrown exceptions

Error Handling Strategies

  • Return Result/Option types for expected failures
  • Use error codes for performance-critical paths
  • Implement circuit breakers for external dependencies
  • Log errors appropriately (error level, context, stack trace)
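A Result type for expected failures, as recommended above, can be sketched in Python with two small dataclasses (names are hypothetical; languages with built-in `Result`/`Either` types make this more ergonomic):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Ok:
    value: float

@dataclass
class Err:
    reason: str

ParseResult = Union[Ok, Err]

def parse_price(raw: str) -> ParseResult:
    """Expected failures are returned as values, not raised as exceptions."""
    try:
        price = float(raw)
    except ValueError:
        return Err(f"not a number: {raw!r}")
    if price < 0:
        return Err(f"negative price: {price}")
    return Ok(price)
```

The caller is forced to consider both branches explicitly, whereas a raised exception for ordinary bad input is easy to forget to catch.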

🧪 Testing Strategy

Test Structure

  • Follow Arrange-Act-Assert (AAA) pattern
  • Use descriptive test names that explain what and why
  • One assertion per test (when practical)
  • Test behavior, not implementation
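The AAA structure can be sketched as follows (plain `assert`-based Python for neutrality; the function under test and the test name are hypothetical):

```python
def apply_discount(price: float, percent: float) -> float:
    """Function under test."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_reduces_price_by_given_percent() -> None:
    # Arrange: set up inputs and the expected outcome
    price, percent = 200.0, 25.0
    # Act: invoke exactly the behavior under test
    result = apply_discount(price, percent)
    # Assert: one focused check on observable behavior, not internals
    assert result == 150.0

test_apply_discount_reduces_price_by_given_percent()
```

Note that the test name states both what is checked and why it matters, so a failure report reads as a sentence.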

Test Coverage Guidelines

  • Unit tests: All public methods and edge cases
  • Integration tests: Critical paths and external integrations
  • End-to-end tests: Key user journeys
  • Aim for 80%+ code coverage (quality over quantity)

Test Best Practices

  • Use test doubles (mocks, stubs, fakes) appropriately
  • Keep tests independent and idempotent
  • Test data builders for complex test setups
  • Parameterized tests for multiple scenarios
  • Performance tests for critical paths

📝 Version Control Guidelines

Commit Best Practices

  • Atomic commits: One logical change per commit
  • Frequent commits: Small, incremental changes
  • Clean history: Use interactive rebase when needed
  • Branch strategy: Follow project's branching model

Commit Message Format

type(scope): brief description

Detailed explanation if needed
- Bullet points for multiple changes
- Reference issue numbers: #123

BREAKING CHANGE: description (if applicable)

Git Signature Rules

  • DO NOT include Claude signature in git commits
  • Use standard commit format without AI attribution
  • Maintain clean commit history without automated signatures

Commit Types

  • feat: New feature
  • fix: Bug fix
  • refactor: Code refactoring
  • perf: Performance improvement
  • test: Test changes
  • docs: Documentation
  • style: Code formatting
  • chore: Build/tooling changes

🏗️ Architecture Guidelines

Clean Architecture Principles

  • Dependency Rule: Dependencies point inward
  • Layer Independence: Each layer has single responsibility
  • Testability: Business logic independent of frameworks
  • Framework Agnostic: Core logic doesn't depend on external tools

Common Architectural Patterns

  • Repository Pattern: Abstract data access
  • Service Layer: Business logic coordination
  • Dependency Injection: Loose coupling
  • Event-Driven: For asynchronous workflows
  • CQRS: When read/write separation needed
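The Repository pattern from the list above can be sketched in Python (hypothetical names; `typing.Protocol` stands in for the interface, and the in-memory implementation doubles as a test fake):

```python
from typing import Dict, Optional, Protocol

class UserRepository(Protocol):
    """Abstract data access: callers never see the storage mechanism."""
    def find_by_id(self, user_id: int) -> Optional[str]: ...
    def save(self, user_id: int, name: str) -> None: ...

class InMemoryUserRepository:
    """Demo/test implementation; a database-backed one shares the interface."""
    def __init__(self) -> None:
        self._rows: Dict[int, str] = {}

    def find_by_id(self, user_id: int) -> Optional[str]:
        return self._rows.get(user_id)

    def save(self, user_id: int, name: str) -> None:
        self._rows[user_id] = name
```

Business logic depends only on `UserRepository`, which keeps the domain layer framework-agnostic, exactly as the dependency rule requires.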

Module Organization

src/
├── domain/          # Business entities and rules
├── application/     # Use cases and workflows
├── infrastructure/  # External dependencies
├── presentation/    # UI/API layer
└── shared/         # Cross-cutting concerns

🔄 Safe Refactoring Practices

Preventing Side Effects During Refactoring

  • Run all tests before and after every refactoring step
  • Make incremental changes: One small refactoring at a time
  • Use automated refactoring tools when available (IDE support)
  • Preserve existing behavior: Refactoring should not change functionality
  • Create characterization tests for legacy code before refactoring
  • Use feature flags for large-scale refactorings
  • Monitor production metrics after deployment

Refactoring Checklist

  1. Before Starting:

    • All tests passing
    • Understand current behavior completely
    • Create backup branch
    • Document intended changes
  2. During Refactoring:

    • Keep commits atomic and reversible
    • Run tests after each change
    • Verify no behavior changes
    • Check for performance impacts
  3. After Completion:

    • All tests still passing
    • Code coverage maintained or improved
    • Performance benchmarks verified
    • Peer review completed

Common Refactoring Patterns

  • Extract Method: Break large functions into smaller ones
  • Rename: Improve clarity with better names
  • Move: Relocate code to appropriate modules
  • Extract Variable: Make complex expressions readable
  • Inline: Remove unnecessary indirection
  • Extract Interface: Decouple implementations
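Extract Method and Extract Variable can be sketched as a before/after pair (hypothetical names; the key property is that behavior is preserved, which the tests around a real refactoring would verify):

```python
# Before: one function mixing validation, calculation, and formatting.
def invoice_line_before(quantity: int, unit_price: float) -> str:
    if quantity <= 0 or unit_price < 0:
        raise ValueError("invalid line item")
    return f"Total: {quantity * unit_price * 1.1:.2f}"

# After Extract Variable and Extract Method: each step named, behavior unchanged.
TAX_MULTIPLIER = 1.1  # magic number extracted into a named constant

def _validate_line(quantity: int, unit_price: float) -> None:
    if quantity <= 0 or unit_price < 0:
        raise ValueError("invalid line item")

def _total_with_tax(quantity: int, unit_price: float) -> float:
    subtotal = quantity * unit_price  # extracted variable names the step
    return subtotal * TAX_MULTIPLIER

def invoice_line(quantity: int, unit_price: float) -> str:
    _validate_line(quantity, unit_price)
    return f"Total: {_total_with_tax(quantity, unit_price):.2f}"
```

For identical inputs the two versions produce identical output, which is the invariant every step of a safe refactoring must maintain.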

🧠 Continuous Improvement

Code Review Focus Areas

  • Correctness: Does it work as intended?
  • Clarity: Is it easy to understand?
  • Consistency: Does it follow conventions?
  • Completeness: Are edge cases handled?
  • Performance: Are there obvious bottlenecks?
  • Security: Are there vulnerabilities?
  • Side Effects: Are there unintended consequences?

Knowledge Sharing

  • Document decisions in ADRs (Architecture Decision Records)
  • Create runbooks for operational procedures
  • Maintain README files for each module
  • Share learnings through team discussions
  • Update rules based on team consensus

Quality Validation Checklist

Before completing any task, confirm:

Phase Completion

  • Phase 1: Comprehensive analysis completed
  • Phase 2: Detailed plan with acceptance criteria
  • Phase 3: Implementation meets all criteria

Code Quality

  • Follows naming conventions
  • Type safety enforced
  • Single Responsibility maintained
  • Proper error handling
  • Adequate test coverage
  • Documentation complete

Best Practices

  • No code smells or anti-patterns
  • Performance considerations addressed
  • Security vulnerabilities checked
  • Accessibility requirements met
  • Internationalization ready (if applicable)

🎯 Success Metrics

Code Quality Indicators

  • Low cyclomatic complexity (≤10 per function)
  • High cohesion, low coupling
  • Minimal code duplication (<5%)
  • Clear separation of concerns
  • Consistent style throughout

Professional Standards

  • Readable: New developers understand quickly
  • Maintainable: Changes are easy to make
  • Testable: Components tested in isolation
  • Scalable: Handles growth gracefully
  • Reliable: Fails gracefully with clear errors

📊 Advanced Prompt Engineering

Context Engineering Techniques

  • Structured prompts with clear sections and hierarchy
  • Few-shot examples to demonstrate expected patterns
  • Chain-of-thought reasoning for complex problems
  • Role-based prompting to activate specific expertise
  • Constraint specification to guide solution boundaries
  • Output formatting instructions for consistent results

Prompt Optimization Strategies

  • Be specific about requirements and constraints
  • Include context relevant to the task
  • Define success criteria explicitly
  • Use delimiters to separate different sections
  • Provide examples of desired outputs
  • Iterate and refine based on results

📑 Session Continuity Management

Long Conversation Handling

When conversations are expected to be lengthy:

  1. Create session documentation in markdown format
  2. Document key decisions and implementation details
  3. Track progress with checkpoints and milestones
  4. Summarize complex discussions for easy reference
  5. Save state information for resuming work

Continuity Document Structure

# Session: [Task Name] - [Date]

## Objective
[Clear description of the goal]

## Progress Summary
- [ ] Task 1: Description
- [x] Task 2: Completed - Details
- [ ] Task 3: In Progress

## Key Decisions
1. Decision: Rationale
2. Decision: Rationale

## Implementation Details
[Technical details, code snippets, configurations]

## Next Steps
[What needs to be done in the next session]

## Important Context
[Any critical information for continuing work]

State Preservation

  • Save work incrementally to prevent loss
  • Document assumptions and constraints
  • Track dependencies and blockers
  • Note unresolved issues for future sessions
  • Create handoff notes for seamless continuation

Remember: These are guidelines, not rigid rules. Use professional judgment and adapt to project needs while maintaining high quality standards.