๐—–๐—ผ๐—บ๐—ฝ๐—ฎ๐—ฟ๐—ฒ ๐—ฆ๐—ฒ๐—พ๐˜‚๐—ฒ๐—ป๐˜๐—ถ๐—ฎ๐—น ๐˜ƒ๐˜€. ๐—ฃ๐—ฎ๐—ฟ๐—ฎ๐—น๐—น๐—ฒ๐—น ๐˜„๐—ถ๐˜๐—ต ๐—ง๐—ฎ๐˜€๐—ธ.๐—ช๐—ต๐—ฒ๐—ป๐—”๐—น๐—น

๐—ฆ๐˜‚๐—ฝ๐—ฒ๐—ฟ๐—ฐ๐—ต๐—ฎ๐—ฟ๐—ด๐—ถ๐—ป๐—ด ๐—”๐˜€๐˜†๐—ป๐—ฐ ๐—ถ๐—ป ๐—–#: ๐—–๐—ผ๐—บ๐—ฝ๐—ฎ๐—ฟ๐—ฒ ๐—ฆ๐—ฒ๐—พ๐˜‚๐—ฒ๐—ป๐˜๐—ถ๐—ฎ๐—น ๐˜ƒ๐˜€. ๐—ฃ๐—ฎ๐—ฟ๐—ฎ๐—น๐—น๐—ฒ๐—น ๐˜„๐—ถ๐˜๐—ต ๐—ง๐—ฎ๐˜€๐—ธ.๐—ช๐—ต๐—ฒ๐—ป๐—”๐—น๐—น โšก

Hey Devs! 👋 Have you ever wanted to call multiple services or run multiple tasks at once without waiting for each one to finish before starting the next? That's exactly where Task.WhenAll comes to the rescue! Instead of running each task one by one (which can be slow if each task takes a while), you can fire them all simultaneously, wait for all to complete, and then process the results in bulk.

Below, we have a BenchmarkDotNet setup that measures:

  1. Sequential requests (one after another).
  2. Parallel requests (all at once using Task.WhenAll).

By measuring both approaches, we'll see how Task.WhenAll can significantly reduce the overall time when dealing with I/O-bound tasks such as HTTP calls. Let's dive in! 🏊‍♂️
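The setup described above might look roughly like this (a minimal sketch: the `FetchAsync` helper and its 500 ms `Task.Delay` are illustrative stand-ins for a real HTTP call; the benchmark method names mirror the ones discussed below):

```csharp
using System.Linq;
using System.Threading.Tasks;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class AsyncBenchmarks
{
    // Simulated I/O-bound call: stands in for an HTTP request taking ~500 ms.
    private static async Task<int> FetchAsync(int id)
    {
        await Task.Delay(500);
        return id;
    }

    [Benchmark]
    public async Task<int[]> GetAllSequentialAsync()
    {
        var results = new int[10];
        for (var i = 0; i < 10; i++)
            results[i] = await FetchAsync(i); // each call finishes before the next starts
        return results;
    }

    [Benchmark]
    public async Task<int[]> GetAllParallelAsync()
    {
        // Start all 10 tasks first, then await them together.
        var tasks = Enumerable.Range(0, 10).Select(FetchAsync);
        return await Task.WhenAll(tasks);
    }
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<AsyncBenchmarks>();
}
```

Run it in Release mode (`dotnet run -c Release`) so BenchmarkDotNet can produce meaningful numbers.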

๐—ช๐—ต๐˜† ๐˜‚๐˜€๐—ฒ ๐™๐™–๐™จ๐™ .๐™’๐™๐™š๐™ฃ๐˜ผ๐™ก๐™ก? ๐Ÿ’ก

  1. Time Savings: If each request takes 500 ms and you have 10 requests, a sequential approach might end up taking ~5000 ms. With Task.WhenAll, it could take around 500 ms total (assuming all 10 run simultaneously)!

  2. ๐—–๐—น๐—ฒ๐—ฎ๐—ป๐—ฒ๐—ฟ ๐—–๐—ผ๐—ฑ๐—ฒ Instead of a big ๐˜ง๐˜ฐ๐˜ณ๐˜ฆ๐˜ข๐˜ค๐˜ฉ that awaits every single call, you collect all tasks in a list and await them together. Easy to read and maintain!

  3. ๐—•๐—ฒ๐˜๐˜๐—ฒ๐—ฟ ๐—ฅ๐—ฒ๐˜€๐—ผ๐˜‚๐—ฟ๐—ฐ๐—ฒ ๐—จ๐˜๐—ถ๐—น๐—ถ๐˜‡๐—ฎ๐˜๐—ถ๐—ผ๐—ป While one request is waiting on network I/O, others can proceed. This helps maximize concurrency without writing complex multi-threaded logic.

๐—˜๐˜…๐—ฝ๐—ฒ๐—ฐ๐˜๐—ฒ๐—ฑ ๐—ข๐˜‚๐˜๐—ฐ๐—ผ๐—บ๐—ฒ & ๐—ง๐—ฎ๐—ธ๐—ฒ๐—ฎ๐˜„๐—ฎ๐˜†๐˜€ ๐Ÿค” ๐—ฆ๐—ฒ๐—พ๐˜‚๐—ฒ๐—ป๐˜๐—ถ๐—ฎ๐—น (๐˜Ž๐˜ฆ๐˜ต๐˜ˆ๐˜ญ๐˜ญ๐˜š๐˜ฆ๐˜ฒ๐˜ถ๐˜ฆ๐˜ฏ๐˜ต๐˜ช๐˜ข๐˜ญ๐˜ˆ๐˜ด๐˜บ๐˜ฏ๐˜ค): Runs each HTTP-like call one by one, typically summing all delays (e.g., 10 ร— 500 ms).

๐—ฃ๐—ฎ๐—ฟ๐—ฎ๐—น๐—น๐—ฒ๐—น (๐˜Ž๐˜ฆ๐˜ต๐˜ˆ๐˜ญ๐˜ญ๐˜—๐˜ข๐˜ณ๐˜ข๐˜ญ๐˜ญ๐˜ฆ๐˜ญ๐˜ˆ๐˜ด๐˜บ๐˜ฏ๐˜ค): Initiates all tasks at once and waits for them together, drastically reducing total time if each call is I/O-bound.

Feel free to share your thoughts or experiences with parallel async calls! Do you have any real-world story of boosting performance with Task.WhenAll? Let us know in the comments! 🎉


Copyright © 2025 Igor Carvalho