
Erlang Concurrency

Introduction

Erlang's concurrency model, based on lightweight processes and message passing, enables building massively scalable systems. Processes are isolated, share no memory, and communicate asynchronously through messages. This model eliminates a whole class of concurrency bugs, such as data races, that plague shared-memory systems.

The BEAM VM efficiently schedules millions of processes, each with its own heap and mailbox. Process creation is fast and cheap, enabling "process per entity" designs. Links and monitors provide failure detection, while selective receive enables flexible message handling patterns.

This skill covers process creation and spawning, message passing patterns, process links and monitors, selective receive, error propagation, concurrent design patterns, and building scalable concurrent systems.

Process Creation and Spawning

Create lightweight processes for concurrent task execution.

%% Basic process spawning
simple_spawn() ->
    Pid = spawn(fun() -> io:format("Hello from process ~p~n", [self()]) end),
    Pid.

%% Spawn with arguments
spawn_with_args(Message) ->
    spawn(fun() -> io:format("Message: ~p~n", [Message]) end).

%% Spawn and register
spawn_registered() ->
    Pid = spawn(fun() -> loop() end),
    register(my_process, Pid),
    Pid.

loop() ->
    receive
        stop -> ok;
        Msg ->
            io:format("Received: ~p~n", [Msg]),
            loop()
    end.

%% Spawn link (linked processes)
spawn_linked() ->
    spawn_link(fun() ->
        timer:sleep(1000),
        io:format("Linked process done~n")
    end).

%% Spawn monitor
spawn_monitored() ->
    {Pid, Ref} = spawn_monitor(fun() -> timer:sleep(500), exit(normal) end),
    {Pid, Ref}.

%% Process pools
create_pool(N) ->
    [spawn(fun() -> worker_loop() end) || _ <- lists:seq(1, N)].

worker_loop() ->
    receive
        {work, Data, From} ->
            Result = process_data(Data),
            From ! {result, Result},
            worker_loop();
        stop -> ok
    end.

process_data(Data) -> Data * 2.

%% Parallel map
pmap(F, List) ->
    Parent = self(),
    Pids = [spawn(fun() -> Parent ! {self(), F(X)} end) || X <- List],
    [receive {Pid, Result} -> Result end || Pid <- Pids].

%% Fork-join pattern
fork_join(Tasks) ->
    Self = self(),
    Pids = [spawn(fun() -> Result = Task(), Self ! {self(), Result} end) || Task <- Tasks],
    [receive {Pid, Result} -> Result end || Pid <- Pids].

Lightweight processes enable massive concurrency with minimal overhead.
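
To illustrate just how cheap spawning is, here is a minimal sketch that starts N processes, each of which sends one message back to the parent, which counts the replies. The names `spawn_burst/1` and `collect/1` are illustrative, not part of the skill above.

```erlang
%% Sketch: spawn N processes, each sends one message back;
%% the parent blocks until it has counted all N replies.
spawn_burst(N) ->
    Parent = self(),
    [spawn(fun() -> Parent ! done end) || _ <- lists:seq(1, N)],
    collect(N).

collect(0) -> ok;
collect(N) ->
    receive done -> collect(N - 1) end.
```

On typical hardware, `spawn_burst(100000)` returns `ok` quickly, since a freshly spawned process needs only a few hundred words of memory.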

Message Passing Patterns

Processes communicate through asynchronous message passing without shared memory.

%% Send and receive
send_message() ->
    Pid = spawn(fun() ->
        receive
            {From, Msg} ->
                io:format("Received: ~p~n", [Msg]),
                From ! {reply, "Acknowledged"}
        end
    end),
    Pid ! {self(), "Hello"},
    receive
        {reply, Response} -> io:format("Response: ~p~n", [Response])
    after 5000 -> io:format("Timeout~n")
    end.

%% Request-response pattern
request(Pid, Request) ->
    Ref = make_ref(),
    Pid ! {self(), Ref, Request},
    receive
        {Ref, Response} -> {ok, Response}
    after 5000 -> {error, timeout}
    end.

server_loop() ->
    receive
        {From, Ref, {add, A, B}} ->
            From ! {Ref, A + B},
            server_loop();
        {From, Ref, {multiply, A, B}} ->
            From ! {Ref, A * B},
            server_loop();
        stop -> ok
    end.

%% Publish-subscribe
start_pubsub() -> spawn(fun() -> pubsub_loop([]) end).

pubsub_loop(Subscribers) ->
    receive
        {subscribe, Pid} -> pubsub_loop([Pid | Subscribers]);
        {unsubscribe, Pid} -> pubsub_loop(lists:delete(Pid, Subscribers));
        {publish, Message} ->
            [Pid ! {message, Message} || Pid <- Subscribers],
            pubsub_loop(Subscribers)
    end.

%% Pipeline pattern (sequential function composition)
pipeline(Data, Functions) ->
    lists:foldl(fun(F, Acc) -> F(Acc) end, Data, Functions).

%% Each stage runs in its own process; note the stages still
%% execute one after another, since each result is awaited.
concurrent_pipeline(Data, Stages) ->
    Self = self(),
    lists:foldl(fun(Stage, AccData) ->
        spawn(fun() -> Self ! {result, Stage(AccData)} end),
        receive {result, R} -> R end
    end, Data, Stages).

Message passing enables safe concurrent communication without locks.
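
Putting the publish-subscribe server above to use might look like this. The sketch assumes `start_pubsub/0` and `pubsub_loop/1` from this section are compiled in the same module; `pubsub_demo/0` is an illustrative name.

```erlang
%% Usage sketch: subscribe ourselves, publish one message,
%% and wait for the broadcast to arrive in our own mailbox.
pubsub_demo() ->
    Bus = start_pubsub(),
    Bus ! {subscribe, self()},
    Bus ! {publish, hello},
    receive
        {message, Msg} -> Msg
    after 1000 -> timeout
    end.
```

Because message order between any two processes is preserved, the bus is guaranteed to handle the subscription before the publish.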

Links and Monitors

Links connect processes bidirectionally, so an exit in one propagates to the other, while monitors provide one-way observation.

%% Process linking
link_example() ->
    process_flag(trap_exit, true),
    Pid = spawn_link(fun() -> timer:sleep(1000), exit(normal) end),
    receive
        {'EXIT', Pid, Reason} ->
            io:format("Process exited: ~p~n", [Reason])
    end.

%% Monitoring
monitor_example() ->
    Pid = spawn(fun() -> timer:sleep(500), exit(normal) end),
    Ref = monitor(process, Pid),
    receive
        {'DOWN', Ref, process, Pid, Reason} ->
            io:format("Process down: ~p~n", [Reason])
    end.

%% Supervisor pattern
supervisor() ->
    process_flag(trap_exit, true),
    Worker = spawn_link(fun() -> worker() end),
    supervisor_loop(Worker).

supervisor_loop(Worker) ->
    receive
        {'EXIT', Worker, _Reason} ->
            NewWorker = spawn_link(fun() -> worker() end),
            supervisor_loop(NewWorker)
    end.

worker() ->
    receive
        crash -> exit(crashed);
        work -> worker()
    end.

Links and monitors enable building fault-tolerant systems with automatic failure detection.
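
One detail worth knowing about monitors: if you stop caring about a process before it dies, `demonitor/2` with the `flush` option both removes the monitor and discards any `'DOWN'` message that has already arrived in your mailbox. A minimal sketch (`watch_then_cancel/0` is an illustrative name):

```erlang
%% Set up a monitor, then cancel it cleanly. The flush option
%% prevents a stray 'DOWN' message from lingering in the mailbox.
watch_then_cancel() ->
    Pid = spawn(fun() -> receive stop -> ok end end),
    Ref = monitor(process, Pid),
    %% ... later, we no longer care about this process ...
    true = demonitor(Ref, [flush]),
    Pid ! stop,
    ok.
```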

Best Practices

Create processes liberally as they are lightweight and cheap to spawn

Use message passing exclusively for inter-process communication without shared state

Implement proper timeouts on receives to prevent indefinite blocking

Use monitors for one-way observation when bidirectional linking is unnecessary

Keep process state minimal to reduce memory usage per process

Use registered names sparingly as global names limit scalability

Implement proper error handling with links and monitors for fault tolerance

Use selective receive to handle specific messages while leaving others queued

Avoid message accumulation by handling all message patterns in receive clauses

Profile concurrent systems to identify bottlenecks and optimize hot paths
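
Selective receive, mentioned in several of the practices above, matches only the patterns listed and leaves everything else queued. A sketch of priority handling under that idea (the `high`/`normal` tags are illustrative; `after 0` falls through immediately when no high-priority message is queued):

```erlang
%% Take the next message, preferring {high, _} over {normal, _}.
take_prioritized() ->
    receive
        {high, Msg} -> {high, Msg}
    after 0 ->
        receive
            {normal, Msg} -> {normal, Msg};
            {high, Msg} -> {high, Msg}
        end
    end.
```

Use this pattern sparingly: every selective receive scans the mailbox, which gets expensive when many unmatched messages accumulate.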

Common Pitfalls

Creating too few processes underutilizes Erlang's concurrency model

Not using timeouts in receive causes indefinite blocking on failure

Accumulating messages in mailboxes causes memory leaks and performance degradation

Using shared ETS tables as a mutex replacement defeats the benefits of isolation

Not handling all message types leaves unmatched messages queued, growing the mailbox without bound

Forgetting to trap exits in supervisors prevents proper error handling

Creating circular links causes cascading failures without proper supervision

Using processes for fine-grained parallelism adds overhead without benefits

Not monitoring spawned processes loses track of failures

Overusing registered names creates single points of failure and contention
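
The mailbox-growth pitfalls above are commonly avoided with a catch-all clause at the end of a server's receive, which logs and drops anything unexpected. A sketch, where `do_work/2` is a placeholder state update:

```erlang
%% A server loop with a catch-all clause so unmatched messages
%% cannot pile up in the mailbox.
robust_loop(State) ->
    receive
        {work, Data} ->
            robust_loop(do_work(Data, State));
        stop ->
            ok;
        Unexpected ->
            io:format("Dropping unexpected message: ~p~n", [Unexpected]),
            robust_loop(State)
    end.

do_work(Data, State) -> [Data | State].  % placeholder state update
```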

When to Use This Skill

Apply processes for concurrent tasks requiring isolation and independent state.

Use message passing for all inter-process communication in distributed systems.

Leverage links and monitors to build fault-tolerant supervision hierarchies.

Create process pools for concurrent request handling and parallel computation.

Use selective receive for complex message handling protocols.

Resources

  • Erlang Concurrency Tutorial

  • Learn You Some Erlang - Concurrency

  • Erlang Process Manual

  • Designing for Scalability with Erlang/OTP

  • Programming Erlang
