
TWIL is our weekly series designed to foster a culture of continuous learning in software development. This week, Katie walks us through the elegantly simple #presence, a method that sifts out blank values and returns either the value itself or nil.

Ruby #presence

Using thing.presence instead of relying on the truthiness/falsiness of thing directly: #presence returns the value itself or nil, and importantly it treats blank values (empty strings, empty arrays, and so on) as nil. Note that #presence and #present? come from ActiveSupport rather than core Ruby, so they're available in Rails apps or anywhere activesupport is loaded.

It is essentially just shorthand for thing.present? ? thing : nil.
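Under the hood, ActiveSupport's definition is essentially that one-liner. A simplified sketch of the Object core extension (not the full annotated source):

class Object
  # Returns the receiver if it is present?, otherwise nil.
  def presence
    self if present?
  end
end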

# Empty string

> "" ? "truthy" : "falsy"
"truthy"

> "".present?
false

> "".presence
nil

# Empty array

> [] ? "truthy" : "falsy"
"truthy"

> [].present?
false

> [].presence
nil

# Non-empty string

> "thing" ? "truthy" : "falsy"
"truthy"

> "thing".present?
true

> "thing".presence
"thing"

# Non-empty array

> [1,2,3] ? "truthy" : "falsy"
"truthy"

> [1,2,3].present?
true

> [1,2,3].presence
[1, 2, 3]

# Nil

> nil ? "truthy" : "falsy"
"falsy"

> nil.present?
false

> nil.presence
nil

# True

> true ? "truthy" : "falsy"
"truthy"

> true.present?
true

> true.presence
true

# False

> false ? "truthy" : "falsy"
"falsy"

> false.present?
false

> false.presence
nil
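
Where this really pays off is falling back to a default when a value might be blank. A minimal sketch (params[:name] and the "Anonymous" default are just illustrative):

# Without #presence, you need an explicit blank check:
name = params[:name].blank? ? "Anonymous" : params[:name]

# With #presence, blank values fall through to the default:
name = params[:name].presence || "Anonymous"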

Resources

  • Ruby
Katie Linero

Senior Software Engineer
