Memory Management in Operating Systems

Memory Management Fundamentals

  • Definition: Memory management is the process of controlling and coordinating computer memory, assigning portions to programs when needed, and freeing them for reuse when no longer needed.

  • Main Objectives:

    • Efficient utilization of available memory
    • Protection and isolation between processes
    • Sharing of memory when needed (for IPC)
    • Logical organization of memory space
    • Support for virtual memory
  • Memory Hierarchy:

    • Registers: Fastest, smallest, most expensive memory
    • Cache: Very fast, small capacity, high cost per byte
    • Main Memory (RAM): Moderate speed and cost
    • Secondary Storage (HDD/SSD): Slow, large capacity, low cost
    • Example: A modern system might have 32 KB L1 cache, 256 KB L2 cache, 8 MB L3 cache, 16 GB of RAM, and 1 TB of SSD storage
  • Logical vs Physical Address Space:

    • Logical address: Generated by CPU during execution (also called virtual address)
    • Physical address: Actual memory location in RAM
    • Translation done by Memory Management Unit (MMU)
    • Example: A program might use logical address 0x1000, which the MMU translates to physical address 0x7E1000
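As a minimal sketch, the logical-to-physical mapping can be modeled with a base (relocation) register plus a limit check; the base value 0x7E0000 and 1MB limit below are assumptions chosen so that logical address 0x1000 maps to physical 0x7E1000 as in the example:

```c
#include <stdint.h>

/* Sketch of MMU-style dynamic relocation: a logical address is checked
 * against a limit register, then added to a base (relocation) register
 * to form the physical address.  Returns -1 on a protection fault. */
int64_t translate(uint32_t logical, uint32_t base, uint32_t limit) {
    if (logical >= limit)
        return -1;                   /* address outside the process's space */
    return (int64_t)base + logical;  /* physical = base + logical */
}
```

With base 0x7E0000 and limit 0x100000, translate(0x1000, ...) yields 0x7E1000; an out-of-range address returns the fault marker instead.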

Swapping

  • Definition: The process of temporarily removing inactive programs from memory to secondary storage to free space for active programs.

  • Swapping Mechanism:

    • OS identifies inactive process
    • Copies entire process image to swap space on disk
    • Loads another process into the freed memory space
    • Later, swapped-out process can be brought back into memory (possibly at a different location)
  • Context Switch with Swapping:

    • Save CPU context of process being swapped out
    • Write process image to disk
    • Read new process image from disk
    • Restore CPU context for new process
    • Example time costs: Save context (0.1ms), swap 4MB process to disk (40ms), load new 4MB process (40ms), restore context (0.1ms) = total ~80ms
  • Advantages:

    • Allows more processes than would fit in physical memory
    • Provides illusion of larger memory than physically available
  • Disadvantages:

    • Significant performance overhead due to disk I/O
    • Entire process must be swapped in/out (inefficient for large processes)

Contiguous Allocation

  • Definition: Memory allocation scheme where each process is allocated a single continuous block of memory.

  • Fixed Partitioning:

    • Memory divided into fixed-size partitions
    • Each partition can contain exactly one process
    • Partitions may be equal or unequal size
    • Example: 32MB RAM divided into 4 partitions of 8MB each
  • Dynamic Partitioning:

    • Partitions created dynamically as processes require
    • Partition size matches process size
    • Example: Process A (10MB), Process B (5MB), Process C (8MB) each get exactly-sized partitions
  • Memory Allocation Strategies:

    • First-Fit: Allocate first available hole large enough
      • Example: Holes of 5MB, 10MB, 7MB, 8MB; process needing 6MB gets the 10MB hole
    • Best-Fit: Allocate smallest hole that is large enough
      • Example: Using same holes, process needing 6MB gets the 7MB hole
    • Worst-Fit: Allocate largest available hole
      • Example: Using same holes, process needing 6MB gets the 10MB hole
  • Issues with Contiguous Allocation:

    • External Fragmentation: Total memory space exists but non-contiguous
      • Example: Having 3 free blocks of 3MB each but unable to allocate 8MB
    • Internal Fragmentation: Allocated memory is larger than required
      • Example: 10MB partition assigned to 6MB process wastes 4MB
  • Memory Compaction:

    • Technique to reduce external fragmentation
    • Involves moving allocated blocks to one end of memory
    • Creates one large free block
    • Expensive operation (requires copying and updating addresses)
    • Example: Converting fragmented memory [P1][free][P2][free][P3] to [P1][P2][P3][free]
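The three allocation strategies can be sketched as small C routines over a list of hole sizes (in MB, matching the 5/10/7/8 example); each returns the index of the chosen hole, or -1 if none is large enough:

```c
/* First-fit: first hole large enough. */
int first_fit(const int sizes[], int n, int request) {
    for (int i = 0; i < n; i++)
        if (sizes[i] >= request) return i;
    return -1;
}

/* Best-fit: smallest hole that is large enough. */
int best_fit(const int sizes[], int n, int request) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (sizes[i] >= request && (best < 0 || sizes[i] < sizes[best]))
            best = i;
    return best;
}

/* Worst-fit: largest hole that is large enough. */
int worst_fit(const int sizes[], int n, int request) {
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (sizes[i] >= request && (worst < 0 || sizes[i] > sizes[worst]))
            worst = i;
    return worst;
}
```

For holes {5,10,7,8} and a 6MB request, first-fit and worst-fit both pick the 10MB hole (index 1) while best-fit picks the 7MB hole (index 2), as in the examples.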

Static and Dynamic Partitions

  • Static Partitioning:

    • Fixed number and size of partitions determined at system initialization
    • Remains constant during system operation
    • Simple to implement but inflexible
    • Types:
      • Equal-size partitions: Each partition has the same size
        • Example: 128MB RAM divided into 8 partitions of 16MB each
      • Unequal-size partitions: Different partition sizes for different needs
        • Example: 128MB RAM with partitions of 8MB, 16MB, 32MB, and 72MB
  • Dynamic Partitioning:

    • Partitions created during runtime as needed
    • Size matched to process requirements
    • More flexible but requires more complex management
    • Example scenario:
      • Initial: 64MB free memory
      • Allocate 15MB to Process A: [A:15MB][free:49MB]
      • Allocate 25MB to Process B: [A:15MB][B:25MB][free:24MB]
      • Process A terminates: [free:15MB][B:25MB][free:24MB]
      • Process C requests 20MB: too large for the 15MB hole, so it is placed in the 24MB hole: [free:15MB][B:25MB][C:20MB][free:4MB]
  • Issues with Static Partitioning:

    • Wastage due to internal fragmentation
    • Limited number of processes
    • Inefficient handling of varying process sizes
  • Issues with Dynamic Partitioning:

    • External fragmentation requiring compaction
    • Overhead of managing partition creation/deletion
    • Need for efficient allocation algorithms
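The dynamic-partitioning scenario can be simulated with a toy partition table; the sketch below does first-fit allocation with hole splitting (freeing and hole coalescing are omitted for brevity), and the demo replays the 64MB scenario: A takes 15MB, B takes 25MB, A terminates, then C's 20MB request skips the 15MB hole:

```c
#define MAXPART 16

struct part { int start, size, owner; };     /* owner 0 = free hole */

/* First-fit allocation with splitting: carve the request out of the
 * first free hole large enough; return the start address or -1. */
int allocate(struct part t[], int *n, int owner, int req) {
    for (int i = 0; i < *n; i++) {
        if (t[i].owner == 0 && t[i].size >= req) {
            int start = t[i].start;
            if (t[i].size > req) {           /* split: leftover stays free */
                for (int j = *n; j > i + 1; j--) t[j] = t[j - 1];
                t[i + 1] = (struct part){ start + req, t[i].size - req, 0 };
                (*n)++;
            }
            t[i].size = req;
            t[i].owner = owner;
            return start;
        }
    }
    return -1;                               /* no hole large enough */
}

/* Replays the 64MB example scenario; returns where C was placed. */
int partition_demo(void) {
    struct part t[MAXPART] = { { 0, 64, 0 } };
    int n = 1;
    allocate(t, &n, 1, 15);       /* A at 0  */
    allocate(t, &n, 2, 25);       /* B at 15 */
    t[0].owner = 0;               /* A terminates, leaving a 15MB hole */
    return allocate(t, &n, 3, 20);/* C cannot use the 15MB hole */
}
```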

Paging

  • Definition: A memory management scheme that eliminates the need for contiguous physical memory allocation by dividing both physical memory and logical memory into fixed-size blocks.

  • Key Concepts:

    • Page: Fixed-size block of logical memory
    • Frame: Fixed-size block of physical memory
    • Page table: Data structure mapping logical pages to physical frames
    • Example: System with 4KB pages, 1GB physical memory would have 262,144 frames
  • Address Translation:

    • Logical address divided into page number and page offset
    • Page number used as index to page table to find frame number
    • Frame number combined with page offset gives physical address
    • Example calculation:
      • 32-bit address with 4KB pages
      • Page size = 4KB = 2^12 bytes
      • Page offset = 12 bits
      • Page number = 20 bits (32 - 12)
      • Logical address 0x00403204:
        • Page number = 0x00403 (top 20 bits)
        • Offset = 0x204 (bottom 12 bits)
        • If page table maps page 0x00403 to frame 0x00710
        • Physical address = 0x00710204
  • Page Table Implementation:

    • Single-level page table: Direct array indexed by page number
    • Multi-level page table: Hierarchy of tables to reduce table size
      • Example: Two-level table with 10-bit top-level index, 10-bit second-level index, 12-bit offset
    • Inverted page table: One entry per frame instead of per page
    • Hashed page table: Hash function maps virtual page to page table entry
  • Translation Lookaside Buffer (TLB):

    • Fast hardware cache of recent page table entries
    • Reduces overhead of page table lookups
    • Example TLB lookup process:
      1. Check if page number in TLB
      2. If hit, use frame number from TLB (fast)
      3. If miss, consult page table (slow) and update TLB
    • Typical hit rates: 98-99% in most systems
  • Advantages of Paging:

    • No external fragmentation
    • Simple allocation (maintain list of free frames)
    • Support for sharing code pages between processes
  • Disadvantages of Paging:

    • Internal fragmentation (last page of process)
    • Page table overhead (memory for table)
    • Multiple memory accesses for address translation
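The bit-level split used in the worked address-translation example can be written directly in C, assuming 4KB pages; the frame number passed in stands in for a real page-table lookup:

```c
#include <stdint.h>

#define PAGE_BITS 12                 /* 4 KB pages => 12-bit offset */
#define PAGE_SIZE (1u << PAGE_BITS)

/* Top 20 bits of a 32-bit logical address. */
uint32_t page_number(uint32_t logical) { return logical >> PAGE_BITS; }

/* Bottom 12 bits of the logical address. */
uint32_t page_offset(uint32_t logical) { return logical & (PAGE_SIZE - 1); }

/* Recombine the mapped frame number with the unchanged offset. */
uint32_t translate_paged(uint32_t logical, uint32_t frame) {
    return (frame << PAGE_BITS) | page_offset(logical);
}
```

For logical address 0x00403204 mapped to frame 0x00710, this reproduces physical address 0x00710204 from the example.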

Demand Paging

  • Definition: A paging technique where pages are loaded into memory only when demanded (referenced) during execution.

  • Key Concepts:

    • Valid/Invalid bit: Indicates if page is in memory
    • Page fault: Occurs when a program accesses a page not in physical memory
    • Page replacement: Selecting which page to remove when memory is full
    • Example: A process starts with all pages marked invalid; as execution proceeds, pages are loaded on demand
  • Page Fault Handling Process:

    1. CPU generates memory reference

    2. MMU detects invalid page (page fault)

    3. OS traps to page fault handler

    4. OS finds free frame (or selects victim frame)

    5. OS reads required page from disk to frame

    6. OS updates page table

    7. OS restarts the instruction that caused fault

    • Example timing: detect fault (1μs), trap to handler (2μs), find frame (10μs), read page (10ms), update table (5μs), restart (1μs) = ~10ms total
  • Page Replacement Algorithms: (detailed in next section)

  • Performance Considerations:

    • Effective Access Time (EAT): p × (page fault time) + (1-p) × (memory access time)
      • Where p is the page fault rate
      • Example: If memory access = 100ns, page fault = 10ms, p = 0.001
      • EAT = 0.001 × 10ms + 0.999 × 100ns = 10μs + 99.9ns ≈ 10.1μs
  • Copy-on-Write:

    • Optimization to defer page copying
    • Multiple processes initially share same physical pages
    • When process attempts to modify page, a copy is made
    • Example: fork() in UNIX creates process sharing pages with parent until modification
  • Memory-Mapped Files:

    • File mapped to virtual address space
    • Pages loaded on demand as accessed
    • Changes written back to file when pages replaced
    • Example: 100MB file mapped to address space, only accessed portions loaded in RAM
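The effective access time formula from the performance discussion is a one-liner; times here are in seconds:

```c
/* EAT = p * (page fault time) + (1 - p) * (memory access time),
 * where p is the page fault rate. */
double eat(double p, double fault_time, double mem_time) {
    return p * fault_time + (1.0 - p) * mem_time;
}
```

With p = 0.001, a 10ms fault time, and a 100ns memory access, this gives roughly 10.1μs, matching the worked example.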

Page Replacement Algorithms

  • Definition: Strategies to decide which page to evict from memory when a page fault occurs and all frames are occupied.

  • First-In-First-Out (FIFO):

    • Evicts the oldest page first
    • Example sequence: Memory with 3 frames, reference string: 1,2,3,4,1,2,5,1,2,3,4,5
      • Initial state: empty frames
      • After 1,2,3: frames contain [1,2,3]
      • Reference 4: replace 1 with 4 → [4,2,3] (page fault)
      • Reference 1: replace 2 with 1 → [4,1,3] (page fault)
      • Reference 2: replace 3 with 2 → [4,1,2] (page fault)
      • Reference 5: replace 4 with 5 → [5,1,2] (page fault)
      • References 1, 2: hits
      • Reference 3: replace 1 with 3 → [5,3,2] (page fault)
      • Reference 4: replace 2 with 4 → [5,3,4] (page fault)
      • Reference 5: hit
      • Total: 9 page faults
    • Issue: Belady’s Anomaly - increasing the number of frames can increase the number of page faults (FIFO on this same string with 4 frames incurs 10 faults)
  • Optimal (OPT or MIN):

    • Replace page that won’t be used for longest time
    • Example: Same reference string with 3 frames
      • Initial state: empty frames
      • After 1,2,3: frames contain [1,2,3]
      • Reference 4: replace 3 with 4 → [1,2,4] (page fault; 3 is not needed for the longest time)
      • References 1, 2: hits
      • Reference 5: replace 4 with 5 → [1,2,5] (page fault; 4 is needed furthest in the future)
      • References 1, 2: hits
      • Reference 3: replace 1 with 3 → [3,2,5] (page fault; 1 is never used again)
      • Reference 4: replace 2 with 4 → [3,4,5] (page fault; 2 is never used again)
      • Reference 5: hit
      • Total: 7 page faults
    • Advantage: Theoretically optimal
    • Disadvantage: Requires future knowledge, not implementable in practice
  • Least Recently Used (LRU):

    • Replace page that hasn’t been used for longest time
    • Example: Same reference string with 3 frames
      • Initial state: empty frames
      • After 1,2,3: frames contain [1,2,3]
      • Reference 4: replace 1 with 4 → [4,2,3] (page fault)
      • Reference 1: replace 2 with 1 → [4,1,3] (page fault)
      • Reference 2: replace 3 with 2 → [4,1,2] (page fault)
      • Reference 5: replace 4 with 5 → [5,1,2] (page fault)
      • References 1, 2: hits
      • Reference 3: replace 5 with 3 → [3,1,2] (page fault; 5 is least recently used)
      • Reference 4: replace 1 with 4 → [3,4,2] (page fault)
      • Reference 5: replace 2 with 5 → [3,4,5] (page fault)
      • Total: 10 page faults
    • Implementation approaches:
      • Counter-based: Each page entry has timestamp of last use
      • Stack-based: Stack of page numbers with most recently used on top
    • Expensive to implement in hardware
  • Clock (Second Chance):

    • Approximation of LRU using reference bit
    • Pages arranged in circular list
    • When replacement needed, check reference bit:
      • If 0, replace page
      • If 1, clear bit and move to next page
    • Example: 4 frames, reference string: 1,2,3,4,5,3,4,1,6,7,8
      • After 1,2,3,4: frames contain [1(1),2(1),3(1),4(1)] (ref bit in parentheses)
      • Reference 5: check 1→2→3→4; all have ref=1, so clear each bit and advance; replace 1 → [5(1),2(0),3(0),4(0)] (page fault)
      • References 3, 4: hits, set ref bits → [5(1),2(0),3(1),4(1)]
      • Reference 1: hand finds 2 with ref=0, replace 2 with 1 → [5(1),1(1),3(1),4(1)] (page fault)
      • References 6, 7, 8: each faults; the hand clears bits as it sweeps and replaces 3, 4, and 5 in turn
      • Total: 9 page faults
  • Least Frequently Used (LFU):

    • Replace page with lowest reference count
    • Example: frames [1,2,3,4], reference counts [3,1,4,2]
    • On page fault, page 2 would be replaced
    • Problem: Pages loaded long ago accumulate high counts
  • Most Frequently Used (MFU):

    • Replace page with highest reference count
    • Logic: High count means page completed its purpose
    • Less commonly used than other algorithms
  • Not Recently Used (NRU):

    • Uses reference bit and modify bit
    • Classes of pages (lowest class is replaced first):
      1. Not referenced, not modified (best replacement candidate)
      2. Not referenced, modified
      3. Referenced, not modified
      4. Referenced, modified (worst replacement candidate)
    • Example: After scanning, if we have pages in classes:
      • Page A: (0,0) - not referenced, not modified
      • Page B: (0,1) - not referenced, modified
      • Page C: (1,0) - referenced, not modified
      • Page D: (1,1) - referenced, modified
      • NRU would replace Page A
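As a sketch, FIFO and LRU can share one simulator: both evict the frame with the smallest timestamp, differing only in whether a hit refreshes the timestamp. On the reference string above with 3 frames it reports 9 faults for FIFO and 10 for LRU:

```c
#include <string.h>

#define NFRAMES_MAX 8

/* Count page faults over a reference string.  age[] holds either load
 * time (FIFO) or last-use time (LRU); the victim is the frame with the
 * smallest age.  Empty frames are marked -1 and filled first. */
int count_faults(const int refs[], int n, int nframes, int lru) {
    int frames[NFRAMES_MAX], age[NFRAMES_MAX], faults = 0;
    memset(frames, -1, sizeof frames);
    memset(age, 0, sizeof age);
    for (int t = 1; t <= n; t++) {
        int page = refs[t - 1], hit = -1, victim = 0;
        for (int i = 0; i < nframes; i++)
            if (frames[i] == page) hit = i;
        if (hit >= 0) {
            if (lru) age[hit] = t;       /* LRU: refresh recency on a hit */
            continue;
        }
        faults++;
        for (int i = 0; i < nframes; i++) {
            if (frames[i] == -1) { victim = i; break; }
            if (age[i] < age[victim]) victim = i;
        }
        frames[victim] = page;
        age[victim] = t;                 /* load time (FIFO) / last use (LRU) */
    }
    return faults;
}
```

Running the same string with 4 frames under FIFO yields 10 faults, one more than with 3 frames, which is exactly Belady's Anomaly.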

Thrashing

  • Definition: A state where the system spends more time swapping pages than executing instructions.

  • Causes:

    • Too many processes in memory
    • Insufficient frames allocated to a process
    • Poor page replacement algorithm
    • Example: System with 128MB RAM running 10 processes, each needing a 20MB working set (200MB of total demand exceeds physical memory)
  • Symptoms:

    • High page fault rate
    • Low CPU utilization
    • Increased disk I/O
    • Degraded system performance
    • Example metrics: Page fault rate >10/second, CPU utilization <30%
  • Working Set Model:

    • Set of pages that a process is actively using
    • Defined by working set window (time interval)
    • Example: Process references pages [1,2,3,4,1,2,5,1,2,3] in the last 1000 instructions (the window)
    • Working set = {1,2,3,4,5}
  • Prevention Strategies:

    • Working Set Window: Allocate enough frames to hold working set
    • Page Fault Frequency (PFF): Adjust allocation based on fault rate
      • Example: If fault rate > threshold, allocate more frames
      • If fault rate < threshold, remove frames
    • Local vs Global Replacement:
      • Local: Process can only replace its own pages
      • Global: Process can replace any process’s pages
      • Example: Local might prevent thrashing in one process but limit utilization
      • Global might improve utilization but risk system-wide thrashing
  • Load Control:

    • Swap out entire processes when thrashing detected
    • Example: System detects thrashing, suspends lowest priority process
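The working set can be computed by scanning the last Δ references; the sketch below assumes page numbers below 64 so that a single integer bitmap can record which pages have been seen:

```c
/* Size of the working set: the number of distinct pages referenced in
 * the window of the last `window` references.  Assumes page numbers
 * are small non-negative integers (< 64). */
int working_set_size(const int refs[], int n, int window) {
    unsigned long long seen = 0;          /* bitmap of pages in the window */
    int start = n > window ? n - window : 0, count = 0;
    for (int i = start; i < n; i++) {
        unsigned long long bit = 1ULL << refs[i];
        if (!(seen & bit)) { seen |= bit; count++; }
    }
    return count;
}
```

For the reference string [1,2,3,4,1,2,5,1,2,3] with a window covering all ten references, the working set is {1,2,3,4,5}, so the size is 5; a window of the last 4 references gives {5,1,2,3}, size 4.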

Segmentation

  • Definition: Memory management scheme that supports user view of memory by dividing programs into logical segments (code, data, stack, etc.).

  • Key Concepts:

    • Segment: Logical unit such as main program, procedure, function, method, object, array, stack, etc.
    • Segment table: Maps logical segments to physical addresses
    • Segment selector: Identifies segment in a logical address
    • Example segments in typical program: code, global variables, heap, stack
  • Logical Address Structure:

    • <segment-number, offset>
    • Segment number used as index to segment table
    • Segment table entry contains:
      • Base address: Starting physical address
      • Limit: Length of segment
    • Example: Logical address (2, 100) with segment table:
      • Segment 2 has base=4000, limit=400
      • Physical address = 4000 + 100 = 4100
      • Check: 100 < 400, so access is valid
  • Protection and Sharing:

    • Each segment table entry includes protection bits
    • Commonly: read, write, execute permissions
    • Segments can be shared between processes
    • Example: Segment table entries
      • Code segment: base=5000, limit=2000, protection=read+execute
      • Data segment: base=7000, limit=1000, protection=read+write
  • Advantages:

    • Supports user’s view of memory
    • No internal fragmentation within segments
    • Easy sharing and protection
    • Segments can grow independently
  • Disadvantages:

    • External fragmentation between segments
    • Segment sizes vary, complicating allocation
    • Dynamic growth may require relocation
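Segmented translation with its bounds check can be sketched in a few lines; segment table entries are (base, limit) pairs and -1 stands in for the protection trap:

```c
#include <stdint.h>

struct seg { uint32_t base, limit; };

/* Segmented address translation: the segment number selects a table
 * entry, the offset is bounds-checked against the limit, and the
 * physical address is base + offset.  Returns -1 on a violation. */
int64_t seg_translate(const struct seg table[], int nsegs,
                      int segno, uint32_t offset) {
    if (segno < 0 || segno >= nsegs || offset >= table[segno].limit)
        return -1;                    /* trap: invalid segment or offset */
    return (int64_t)table[segno].base + offset;
}
```

With segment 2 at base=4000, limit=400 as in the example, logical address (2, 100) translates to physical 4100, while (2, 400) traps because the offset is not below the limit.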

Segmentation with Paging

  • Definition: A hybrid memory management scheme combining segmentation’s logical view with paging’s efficient physical management.

  • Implementation:

    • Logical memory divided into variable-size segments
    • Each segment further divided into fixed-size pages
    • Two-level address translation:
      1. Segment table maps to page table
      2. Page table maps to physical frame
    • Example logical address: <segment-number, page-number, offset>
  • Translation Process:

    • Segment number indexes segment table to find page table
    • Page number indexes page table to find frame
    • Offset used directly
    • Example: Logical address (1, 3, 10) with 4KB pages
      • Segment 1’s page table maps page 3 to frame 28
      • Physical address = (28 × 4KB) + 10 = 114,698
  • Intel x86 Architecture Example:

    • Uses segmentation with paging
    • Segment registers (CS, DS, SS, etc.)
    • Global Descriptor Table (GDT) as segment table
    • Each segment can have its own page table
    • Example: Code in segment CS, offset 0x1234
      • CS register contains segment selector pointing to GDT entry
      • GDT entry contains base of segment’s linear address space
      • Linear address goes through paging to get physical address
  • Advantages:

    • Combines benefits of both systems
    • No external fragmentation (from paging)
    • Logical organization (from segmentation)
    • Protection at segment level
  • Disadvantages:

    • Complex implementation
    • Overhead of two mapping tables
    • Complexity in address translation
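The two-level translation can be sketched with toy per-segment page tables (assuming 4KB pages; -1 marks an unmapped page); the demo reproduces the worked example where logical address (1, 3, 10) maps through frame 28 to physical address 114,698:

```c
#include <stdint.h>

#define PAGE_SHIFT 12                    /* 4 KB pages */

/* Segmentation with paging: the segment number selects that segment's
 * page table, the page number selects a frame, and the offset passes
 * through unchanged.  Toy arrays stand in for the real tables. */
int64_t segpage_translate(const int *tables[], const int npages[],
                          int segno, int pageno, uint32_t offset) {
    if (pageno < 0 || pageno >= npages[segno]) return -1;
    int frame = tables[segno][pageno];
    if (frame < 0) return -1;            /* would raise a page fault */
    return ((int64_t)frame << PAGE_SHIFT) | offset;
}

/* Worked example: segment 1's page table maps page 3 to frame 28,
 * so (1, 3, 10) → 28 * 4096 + 10.  Other entries are made up. */
int64_t segpage_demo(void) {
    static const int pt0[] = { 2 };             /* segment 0: 1 page  */
    static const int pt1[] = { 5, 9, -1, 28 };  /* segment 1: 4 pages */
    const int *tables[] = { pt0, pt1 };
    const int npages[] = { 1, 4 };
    return segpage_translate(tables, npages, 1, 3, 10);
}
```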

File System Interface

File Concepts

  • Definition: A file is a named collection of related information recorded on secondary storage.

  • File Attributes:

    • Name: Human-readable identifier
    • Type: Indicates the file’s purpose (e.g., executable, text)
    • Location: Physical location on device
    • Size: Current file size
    • Protection: Access control information
    • Time stamps: Creation, last modification, last access
    • Owner: User who created the file
    • Example attributes in Unix: -rwxr-xr-x 1 user group 4096 Jan 1 10:00 myfile
  • File Types:

    • Regular files: Program or data files
      • Text files: Lines of characters
      • Binary files: Structured according to program needs
    • Directories: System files for file system structure
    • Special files: Device files or other OS-specific structures
      • Block special files: Disk drives
      • Character special files: Terminals, printers
    • Example extension conventions: .txt (text), .exe (executable), .jpg (image)
  • File Operations:

    • Create: Allocate space, add directory entry
    • Write: Transfer data to file
    • Read: Transfer data from file
    • Reposition/Seek: Move to specific location
    • Delete: Release space, remove directory entry
    • Truncate: Delete content but keep attributes
    • Example sequence: Create new file, write data, close file, reopen later, read data, close file
  • File Structure:

    • Byte sequence: No structure (Unix/Linux)
    • Record sequence: Fixed or variable length records
    • Tree structure: Hierarchical arrangement (e.g., XML)
    • Example: Database file might have fixed 100-byte records
  • Internal File Structure:

    • Disk structure: Physical layout of file on storage
    • Logical structure: How OS presents file to applications
    • Example: A logically sequential file may be physically distributed

Access Methods and Protection

  • File Access Methods:

    • Sequential Access: Records accessed in order (tape model)
      • Operations: read_next(), write_next()
      • Example use: Batch processing, sequential log files
    • Direct Access: Random access to any block (disk model)
      • Operations: read(n), write(n) where n is block number
      • Example use: Database systems, random record lookup
    • Indexed Access: Index provides pointers to various blocks
      • Example: File with index on employee_id field for fast lookups
    • Memory-Mapped Files: File mapped into address space
      • Read/write via memory operations
      • Example: Large data structures loaded into virtual memory
  • Example Access Patterns:

    • Sequential file processing in C:

      #include <stdio.h>

      FILE *fp = fopen("file.txt", "r");
      char buffer[100];
      if (fp != NULL) {
          while (fgets(buffer, sizeof buffer, fp) != NULL) {
              // Process line
          }
          fclose(fp);
      }
    • Random access in C:

      struct record_t record;                 // record layout defined elsewhere
      FILE *fp = fopen("records.dat", "r+b"); // binary mode for fixed records
      fseek(fp, (long)(record_num * sizeof(record)), SEEK_SET);
      fread(&record, sizeof(record), 1, fp);
      // Modify record
      fseek(fp, (long)(record_num * sizeof(record)), SEEK_SET);
      fwrite(&record, sizeof(record), 1, fp);
      fclose(fp);
  • File Protection:

    • Access Control Mechanisms:
      • User-based protection: Access lists for each user
      • Group-based protection: Users in groups with permissions
      • Role-based protection: Users assigned roles with permissions
    • Access Rights:
      • Read: View file contents
      • Write: Modify file
      • Execute: Run file as program
      • Append: Add to file
      • Delete: Remove file
      • List: View file name and attributes
    • Implementation Examples:
      • Unix/Linux permissions: Owner, group, world with rwx bits
        • Example: chmod 755 file (rwx for owner, r-x for group and others)
      • Access Control Lists (ACLs): Fine-grained permissions
        • Example: file.txt: user1:rw-, user2:r--, group1:r--
      • Capability-based systems: User possesses capabilities (tokens)
        • Example: User has token granting write access to specific file
  • Protection in Distributed Systems:

    • Authentication across network
    • Encryption for data in transit
    • Example: Kerberos for authentication, SSL/TLS for encryption

File System Implementation

File System Structure

  • Definition: The organization and algorithms used to store and retrieve data on secondary storage.

  • Layered Architecture:

    • Application Programs: User-level file operations
    • Logical File System: Manages metadata, directories
    • File Organization Module: Logical blocks to physical blocks
    • Basic File System: Issues commands to device drivers
    • I/O Control: Device drivers and interrupt handlers
    • Devices: Physical storage hardware
    • Example: Reading a file traverses all layers from application to device
  • In-Memory File System Structures:

    • Mount table: Information about mounted file systems
    • Directory cache: Recently accessed directory information
    • System-wide open file table: Tracks open files across all processes
    • Per-process open file table: Files opened by specific process
    • Buffers: Cache recently accessed disk blocks
    • Example: Opening a file creates entries in both system and process file tables
  • On-Disk File System Structures:

    • Boot control block: Information for booting OS
    • Volume control block: Volume details (size, free blocks, etc.)
    • Directory structure: Organizes files
    • File control block (inode): Details about file
    • Example: Ext4 filesystem with superblock, group descriptors, inode tables, data blocks
  • Virtual File Systems (VFS):

    • Provides uniform interface to multiple file systems
    • Object-oriented approach with standard operations
    • Example: Linux VFS supports ext4, XFS, FAT32, NTFS through common interface

Allocation Methods

  • Definition: Techniques used to allocate disk space to files.

  • Contiguous Allocation:

    • Each file occupies contiguous blocks
    • File defined by starting block and length
    • Example: File A defined by (start block 5, length 3) occupies blocks 5, 6, and 7
    • Advantages:
      • Simple implementation
      • Excellent read performance (single-seek access)
    • Disadvantages:
      • External fragmentation
      • Difficult file growth
      • Need for compaction
  • Linked Allocation:

    • Each file is a linked list of blocks
    • Directory contains pointer to first and last blocks
    • Each block contains pointer to next block
    • Example: File with blocks [10→20→30→NULL]
    • Advantages:
      • No external fragmentation
      • Files can grow dynamically
    • Disadvantages:
      • Random access inefficient
      • Space required for pointers
      • Reliability issues (broken links)
    • File Allocation Table (FAT):
      • Stores linked list in separate table
      • Example: FAT[10]=20, FAT[20]=30, FAT[30]=-1 (end of file)
      • Used in MS-DOS, Windows 9x
  • Indexed Allocation:

    • Index block contains pointers to file blocks
    • Directory contains pointer to index block
    • Example: Index block [10,20,30] points to three data blocks
    • Advantages:
      • Supports direct access
      • No external fragmentation
    • Disadvantages:
      • Index block overhead
      • Maximum file size limited by index block size
    • Handling Large Files:
      • Linked scheme: Link multiple index blocks
      • Multilevel index: Index of indices
      • Combined scheme (Unix inode):
        • Direct blocks: First 12 pointers directly to data
        • Single indirect: Points to block of pointers
        • Double indirect: Points to block of single indirects
        • Triple indirect: Points to block of double indirects
        • Example: 4KB blocks, 4-byte pointers
          • Direct (12 blocks): 48KB
          • Single indirect (1024 pointers): 4MB
          • Double indirect (1024 × 1024 pointers): 4GB
          • Triple indirect (1024 × 1024 × 1024 pointers): 4TB
  • Free Space Management:

    • Bit vector/map: One bit per block (1=free, 0=allocated)
      • Example: 01001100… means blocks 1,4,5 are free
    • Linked list: Link all free blocks
      • Example: Free list head→block10→block20→NULL
    • Grouping: First free block contains addresses of n free blocks
    • Counting: Store address of first free block and count of contiguous free blocks
      • Example: (block 15, count 5) means blocks 15-19 are free
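The FAT example above (FAT[10]=20, FAT[20]=30, FAT[30]=-1) can be walked with a few lines of C:

```c
/* Walk a FAT chain: fat[b] holds the next block of the file occupying
 * block b, with -1 marking end of file.  Copies the chain into out[]
 * and returns its length (capped at max). */
int fat_chain(const int fat[], int first, int out[], int max) {
    int n = 0;
    for (int b = first; b != -1 && n < max; b = fat[b])
        out[n++] = b;
    return n;
}

/* Demo for the example chain 10 → 20 → 30 → end; returns the chain
 * length for idx < 0, otherwise the block at position idx. */
int fat_demo(int idx) {
    static const int fat[31] = { [10] = 20, [20] = 30, [30] = -1 };
    int out[8];
    int n = fat_chain(fat, 10, out, 8);
    return idx < 0 ? n : out[idx];
}
```

This is exactly the random-access weakness of linked allocation: reaching block k of a file requires following k pointers from the start of the chain (though keeping the FAT cached in memory makes each step cheap).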

Directory Implementation

  • Definition: Methods for organizing and storing information about files.

  • Directory Entry Contents:

    • File name
    • File attributes
    • File location information
    • Example Unix directory entry: inode number + file name
  • Linear List:

    • Simple array of file entries
    • Example: [(name1,metadata1), (name2,metadata2), …]
    • Advantages: Simple implementation
    • Disadvantages: Linear search time (O(n))
    • Finding a file requires scanning the entire directory
  • Hash Table:

    • Hash function maps file names to directory entries
    • Example: hash(“myfile.txt”) = 42, direct lookup at position 42
    • Advantages: Constant time lookup (O(1) average)
    • Disadvantages: Collision handling, fixed table size
    • Common in systems requiring fast lookups
  • Directory Size:

    • Small directories (few entries): Linear list sufficient
    • Medium directories: Hash table improves performance
    • Large directories: B-tree or similar structure
    • Example: A directory with 10,000 files might use B-tree for O(log n) lookups
  • Directory Operations:

    • Search for a file
    • Create a file
    • Delete a file
    • List directory contents
    • Rename a file
    • Traverse the file system
    • Example complexity: Linear list (O(n) search), Hash table (O(1) average search)
  • Directory Caching:

    • Recently used directories cached in memory
    • Example: Opening file “/home/user/documents/report.txt” caches each component
    • Subsequent access to files in /home/user/documents is faster
  • Name Resolution:

    • Process of mapping file path to actual file location
    • Example: Resolving “/home/user/file.txt”
      1. Find root directory (/)
      2. Look up “home” in root directory
      3. Look up “user” in home directory
      4. Look up “file.txt” in user directory
      5. Access file data using retrieved metadata
  • Path Name Translation:

    • Absolute path: Starts from root (e.g., /home/user/file.txt)
    • Relative path: Starts from current directory (e.g., docs/file.txt)
    • Example: In current directory /home, relative path user/file.txt is equivalent to absolute path /home/user/file.txt
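The name-resolution steps above can be sketched as a walk over a flat table of (parent inode, name, inode) directory entries; the miniature tree in the demo is a made-up stand-in for /home/user/file.txt:

```c
#include <string.h>

struct dent { int parent; const char *name; int inode; };

/* Resolve an absolute path component by component, starting at the
 * root (inode 1).  Each step is a linear directory search, as in the
 * linear-list directory implementation.  Returns -1 if any component
 * is missing. */
int resolve(const struct dent d[], int nd, const char *path) {
    char buf[256];
    strncpy(buf, path, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    int cur = 1;                              /* inode of "/" */
    for (char *c = strtok(buf, "/"); c != NULL; c = strtok(NULL, "/")) {
        int next = -1;
        for (int i = 0; i < nd; i++)          /* look up c in directory cur */
            if (d[i].parent == cur && strcmp(d[i].name, c) == 0)
                next = d[i].inode;
        if (next == -1) return -1;            /* component not found */
        cur = next;
    }
    return cur;
}

/* Hypothetical miniature tree: / → home → user → file.txt. */
int resolve_demo(const char *path) {
    static const struct dent d[] = {
        { 1, "home", 2 }, { 2, "user", 3 }, { 3, "file.txt", 4 },
    };
    return resolve(d, 3, path);
}
```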