Introduction
From Skepticism to Trust: A Developer’s Journey
Remember your first encounter with Java? For me, it was 1998, wrestling with Java 1.1 in a university distributed systems course. Coming from a background in assembler and C, I was skeptical. Garbage collection? It felt almost too good to be true. After years of meticulously tracking every allocated memory block, the idea that the runtime would handle memory management seemed like dangerous magic. Java felt slow, memory-hungry, and, frankly, untrustworthy.
But that skepticism would transform into appreciation as Java evolved to become increasingly brain-friendly.

The Cognitive Evolution of Java
Interestingly, this evolution from “dangerous magic” to trusted companion mirrors Java’s broader journey over the past three decades. While other languages have come and gone, Java has endured and thrived largely because of its commitment to cognitive ergonomics – making complex programming tasks more manageable for human minds.
Throughout this journey, each major release has introduced features that reduce mental overhead: from garbage collection freeing us from memory management, to generics eliminating type-casting cognitive burden, to records and pattern matching making data handling more intuitive. This consistent focus on developer cognition hasn’t just kept Java relevant; it has made it a language that grows with its developers, supporting them as they tackle increasingly complex challenges in cloud computing, microservices, and enterprise systems. As we celebrate Java’s 30th anniversary, this cognitive-friendly evolution stands out as perhaps its greatest contribution to software development.
Cognitive Benefits in Practice: A Real-World Example
Let me share a story that illustrates this evolution. In the mid-2000s, I joined a team maintaining a large enterprise application written in Java 1.4. One of my first assignments was to introduce generics as we migrated to Java 1.6. It was a massive undertaking that revealed a critical cognitive burden we had been carrying: Every time we encountered a List in the codebase, we had to hunt through the code to figure out what it contained.
// Before generics - cognitive burden
List users = getUserList();
for (Object obj : users) {
    User user = (User) obj; // Hope this doesn't throw ClassCastException!
    // Process user
}

// After generics - intent is clear
List<User> users = getUserList();
for (User user : users) { // Our brain can focus on the business logic
    // Process user
}
This migration wasn’t just about type safety—it was about making the code’s intent immediately obvious to our brains. We discovered several potential bugs during this process, hidden in the ambiguity of raw types. Each fix felt like removing a cognitive tax we hadn’t even realized we were paying.
The Science Behind Code Comprehension
Miller’s law in psychology states that the average number of items a person can hold in short-term (working) memory is 7 ± 2. This fact has major implications for us programmers: we have to take it into account when we write code so that a reader will be able to understand it.
In “Pragmatic Thinking and Learning: Refactor Your Wetware”, Andy Hunt explores how our brains process information and learn new concepts. His insights on cognitive processing and the importance of understanding our mental models are particularly relevant to software development.
In “The Programmer’s Brain”, Dr. Felienne Hermans describes how our brain works when we work with code.
She states that our working memory can only keep track of 3 – 5 chunks of information at the same time. How big a chunk is depends on how much experience we have with the current task. The more exposure we have had to a certain area, the more our brain is able to recognize patterns and offload our working memory.
Working Memory and Code Chunks
Have you ever looked at some code and instinctively felt that something was off, without being able to pinpoint the cause of that feeling? That is most likely your brain recognizing a pattern and flagging an anomaly. Often it’s only after some debugging that the cause of the problem becomes clear.
We are writing code for others to read and occasionally for machines to execute. We have to be mindful of the fact that the person reading our code may not have the same experience we do and will therefore carry a higher cognitive load trying to make sense of what we have created.
A chunk can be an individual statement, a line, or an entire design pattern. Chunks usually get bigger with experience: if you are new to programming, a for-loop might take several chunks of your working memory, while a more experienced developer can recognize the whole thing as a single chunk.
Fortunately, we can make our code easier to understand, to lower the cognitive load of the reader, by limiting the number of chunks of information we have to keep track of.
There is much we can do as developers: use meaningful names, don’t mix levels of abstraction, keep methods and classes focused on one thing, and so on.
We can also take advantage of the evolution of Java and the new features that help us write code that lowers the cognitive load of the reader.
Evolution of Java Through a Cognitive Lens
Java’s evolution over the past three decades reveals a fascinating pattern: each major release has systematically reduced cognitive load while increasing expressive power. Let’s explore this evolution through the lens of cognitive science.
The Early Days: Simple But Verbose (Java 1.0 – 1.4)
Early Java prioritized simplicity and clarity over conciseness, which matched the cognitive needs of its time:
// Java 1.1 - Explicit iteration
Vector elements = new Vector();
elements.addElement("first");
elements.addElement("second");
for (int i = 0; i < elements.size(); i++) {
    String element = (String) elements.elementAt(i);
    System.out.println(element);
}
While verbose, this code had benefits:
- Every operation was explicit
- Control flow was immediately obvious
- No hidden complexity
However, it imposed significant cognitive load:
- Type casting required mental tracking of actual types
- Error handling was manual and verbose
- Collections couldn’t express their content types
- Boilerplate code cluttered the essential logic
The Type Safety Revolution (Java 5)
Subsequently, Java 5 introduced generics, a feature that dramatically reduced the mental overhead of type tracking:
// Before generics - Mental overhead
List users = getUserList();
Iterator it = users.iterator();
while (it.hasNext()) {
    User user = (User) it.next(); // Mental note: this list contains Users
    processUser(user); // Hope we remembered correctly!
}

// After generics - Compiler enforces our mental model
List<User> users = getUserList();
for (User user : users) { // Type safety guaranteed
    processUser(user); // No mental tracking needed
}
Key cognitive improvements:
- Enhanced for-loop aligned with natural thinking about iteration
- Generics moved type safety from runtime to compile time
- Autoboxing/unboxing removed mental conversion between primitives and objects
- Static imports reduced namespace mental overhead
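These Java 5 features compound: one short method can lean on generics, the enhanced for-loop, autoboxing, and a static import all at once. Here is a minimal, self-contained sketch (the `largest` helper is illustrative, not from the original migration):

```java
import static java.lang.Math.max;       // static import: no Math. prefix needed

import java.util.ArrayList;
import java.util.List;

public class Java5Features {
    public static int largest(int... values) {
        List<Integer> numbers = new ArrayList<>();
        for (int value : values) {      // enhanced for-loop
            numbers.add(value);         // autoboxing: int -> Integer
        }
        int result = Integer.MIN_VALUE;
        for (int number : numbers) {    // auto-unboxing: Integer -> int
            result = max(result, number);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(largest(3, 7, 2)); // prints 7
    }
}
```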
The Expressiveness Era (Java 8)
Most notably, Java 8’s streams and lambdas represented a quantum leap in cognitive alignment. Code that had been a maze of nested loops and conditional statements could now be written in a way that matched our mental model of the data flow:
// Traditional imperative style - Multiple mental contexts
List<Transaction> highValueTransactions = new ArrayList<>();
for (Account account : accounts) {
    if (account.isActive()) {
        for (Transaction transaction : account.getTransactions()) {
            if (transaction.getValue() > 1000) {
                highValueTransactions.add(transaction);
            }
        }
    }
}

// Stream style - Matches our mental data flow model
List<Transaction> highValueTransactions = accounts.stream()
    .filter(Account::isActive)
    .flatMap(account -> account.getTransactions().stream())
    .filter(transaction -> transaction.getValue() > 1000)
    .collect(toList());
Cognitive benefits:
- Declarative style focuses on “what” not “how”
- Method chaining matches natural data transformation thinking
- Immutable operations reduce state tracking
- Common patterns (filter, map, reduce) become visual
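To make the pipeline above self-contained and runnable, here is a sketch using hypothetical `Account` and `Transaction` records standing in for the real domain classes:

```java
import java.util.List;

public class StreamDemo {
    record Transaction(int value) {}
    record Account(boolean active, List<Transaction> transactions) {}

    public static List<Transaction> highValue(List<Account> accounts) {
        return accounts.stream()
                .filter(Account::active)                 // keep active accounts
                .flatMap(a -> a.transactions().stream()) // flatten into one stream
                .filter(t -> t.value() > 1000)           // keep high-value only
                .toList();
    }

    public static void main(String[] args) {
        var accounts = List.of(
                new Account(true, List.of(new Transaction(1500), new Transaction(200))),
                new Account(false, List.of(new Transaction(9999))));
        System.out.println(highValue(accounts)); // only the 1500 transaction survives
    }
}
```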
The Data Modeling Renaissance (Java 14+)
The journey continued through records (finalized in Java 16), sealed interfaces (Java 17), and pattern matching for switch (Java 21). These features enabled something I never imagined back in my C programming days: truly data-driven design that the compiler can verify. Consider this modern Java code:
// Traditional approach - High cognitive load
public class UserStatus {
    private final String status;
    private final String message;
    private final LocalDateTime timestamp;
    // Constructor, getters, equals, hashCode, toString...
    // (40+ lines of boilerplate)
}

// Modern approach - Zero cognitive overhead
public record UserStatus(
    String status,
    String message,
    LocalDateTime timestamp
) {}

// Pattern matching with sealed interfaces
public sealed interface LoginResult {
    record Success(User user, String token) implements LoginResult {}
    record Failure(String reason) implements LoginResult {}
    record MfaRequired(String temporaryToken) implements LoginResult {}
}

// Data-driven flow control
String message = switch (loginResult) {
    case Success(var user, var token) -> "Welcome back, " + user.name();
    case Failure(var reason) -> "Login failed: " + reason;
    case MfaRequired(var token) -> "Please enter your MFA code";
};
This code aligns perfectly with how our brains naturally categorize and handle different cases. The compiler ensures we’ve handled every possibility, reducing the mental overhead of “what if” scenarios.
Furthermore, modern features reduce cognitive load through:
- Records eliminating data class boilerplate
- Pattern matching expressing natural categorization
- Switch expressions handling all cases explicitly
- Sealed interfaces ensuring exhaustive handling
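The fragments above can be combined into one compilable unit. This sketch nests an illustrative `User` record and the `LoginResult` hierarchy inside a demo class; the compiler rejects the switch if any `LoginResult` case goes unhandled (requires Java 21 for record patterns):

```java
public class LoginDemo {
    record User(String name) {}

    sealed interface LoginResult {
        record Success(User user, String token) implements LoginResult {}
        record Failure(String reason) implements LoginResult {}
        record MfaRequired(String temporaryToken) implements LoginResult {}
    }

    // Exhaustive switch: no default branch needed over a sealed hierarchy
    public static String message(LoginResult result) {
        return switch (result) {
            case LoginResult.Success(User user, String token) -> "Welcome back, " + user.name();
            case LoginResult.Failure(String reason) -> "Login failed: " + reason;
            case LoginResult.MfaRequired(String token) -> "Please enter your MFA code";
        };
    }

    public static void main(String[] args) {
        System.out.println(message(new LoginResult.Failure("bad password")));
    }
}
```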
Small Changes, Big Impact on Cognitive Load
Notably, while major features like records and pattern matching get most of the attention, Java’s evolution has included numerous smaller enhancements that have significantly reduced cognitive load. Here are some examples:
Stream.toList() (Java 16)
This seemingly tiny addition eliminated the mental overhead of choosing and remembering the correct collector:
// Before - Mental overhead:
// "Which collector should I use? What's the difference between toList() and toUnmodifiableList()?"
List<String> names = users.stream()
    .map(User::getName)
    .collect(Collectors.toList());

// After - Single chunk: "Convert stream to list"
List<String> names = users.stream()
    .map(User::getName)
    .toList();
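As a runnable version of the snippet above (with a hypothetical `User` record standing in for the real class):

```java
import java.util.List;

public class ToListDemo {
    record User(String name) {}

    public static List<String> names(List<User> users) {
        return users.stream()
                .map(User::name)
                .toList(); // Java 16+: no Collectors import, returns an unmodifiable List
    }

    public static void main(String[] args) {
        System.out.println(names(List.of(new User("Ada"), new User("Linus"))));
    }
}
```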
Helpful NullPointerExceptions (Java 14)
Better error messages reduced debugging cognitive load:
// Given this code:
user.getAddress().getStreet().toUpperCase()
// Before - Mental overhead: "Which part was null?"
// NullPointerException: null
// After - Clear indication:
// NullPointerException: Cannot invoke "String.toUpperCase()"
// because the return value of "Address.getStreet()" is null
Text Blocks (Java 15)
Simplified multiline string handling:
// Before - Mental overhead: Line endings, escaping, indentation
String query = "SELECT u.name, u.email " +
    "FROM users u " +
    "WHERE u.active = true " +
    "  AND u.lastLogin > ?";

// After - What you see is what you get
String query = """
    SELECT u.name, u.email
    FROM users u
    WHERE u.active = true
      AND u.lastLogin > ?
    """;
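A small runnable sketch of the stripping rules: the common indentation up to the closing delimiter is removed as incidental whitespace, and every line, including the last, ends with a newline:

```java
public class TextBlockDemo {
    public static String query() {
        // Indentation shared with the closing """ is stripped
        return """
                SELECT u.name
                FROM users u
                """;
    }

    public static void main(String[] args) {
        System.out.print(query()); // prints the two lines, no leading spaces
    }
}
```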
Enhanced Switch Expressions (Java 14)
Made switch statements more intuitive and safer:
// Before - Mental overhead: Break statements, fall-through cases
String message;
switch (status) {
    case PENDING:
        message = "Still waiting";
        break;
    case APPROVED:
        message = "Good to go";
        break;
    case REJECTED:
        message = "Not accepted";
        break;
    default:
        message = "Unknown status";
}

// After - Direct mapping
String message = switch (status) {
    case PENDING -> "Still waiting";
    case APPROVED -> "Good to go";
    case REJECTED -> "Not accepted";
    case UNKNOWN -> "Unknown status";
};
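Wrapped in a compilable demo (the `Status` enum is assumed, mirroring the constants above); note that covering every constant makes a default branch unnecessary:

```java
public class SwitchDemo {
    enum Status { PENDING, APPROVED, REJECTED, UNKNOWN }

    public static String message(Status status) {
        // Arrow cases: no fall-through, no break, compiler-checked exhaustiveness
        return switch (status) {
            case PENDING -> "Still waiting";
            case APPROVED -> "Good to go";
            case REJECTED -> "Not accepted";
            case UNKNOWN -> "Unknown status";
        };
    }

    public static void main(String[] args) {
        System.out.println(message(Status.APPROVED)); // prints "Good to go"
    }
}
```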
Type Inference for Local Variables (Java 10)
Local-variable type inference with var:
// Before - Mental overhead: Repetitive type information
Map<String, List<String>> map = new LinkedHashMap<String, List<String>>();

// After - Let the compiler handle it
var map = new LinkedHashMap<String, List<String>>();
Collections Factory Methods (Java 9)
Simplified immutable collection creation:
// Before - Mental overhead: Multiple steps, mutability concerns
List<String> list = Collections.unmodifiableList(Arrays.asList("a", "b", "c"));
// After - Direct intention
List<String> list = List.of("a", "b", "c");
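One detail worth knowing: the lists returned by `List.of` are unmodifiable, so any structural change fails fast at runtime. A small sketch:

```java
import java.util.List;

public class FactoryDemo {
    public static List<String> letters() {
        return List.of("a", "b", "c"); // unmodifiable and null-hostile
    }

    public static void main(String[] args) {
        List<String> list = letters();
        try {
            list.add("d"); // structural modification is rejected
        } catch (UnsupportedOperationException e) {
            System.out.println("immutable, as intended");
        }
    }
}
```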
Pattern Matching for switch (Java 21)
Type patterns directly in case labels, with null handled explicitly:
// Before - Mental overhead: instanceof checks and explicit casts
if (obj instanceof Integer) {
    handleInteger((Integer) obj);
} else if (obj instanceof String) {
    handleString((String) obj);
} else if (obj == null) {
    handleNull();
}

// After - Consistent, familiar syntax; null gets its own case
switch (obj) {
    case Integer i -> handleInteger(i);
    case String s -> handleString(s);
    case null -> handleNull();
    default -> { } // pattern switches must be exhaustive
}
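A self-contained sketch of a pattern-matching switch with an explicit null case (requires Java 21; `describe` and its messages are illustrative):

```java
public class PatternSwitchDemo {
    public static String describe(Object obj) {
        return switch (obj) {
            case Integer i -> "integer: " + i;
            case String s  -> "string of length " + s.length();
            case null      -> "nothing at all"; // handled here, no NPE
            default        -> "something else";
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(42));
        System.out.println(describe(null));
    }
}
```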
Collectively, these smaller changes demonstrate Java’s commitment to reducing cognitive load across all aspects of the language. While each improvement might seem minor in isolation, their cumulative effect is substantial:
- Reduced Mental State Tracking
- Fewer special cases to remember
- More consistent patterns
- Clearer error messages
- More Natural Expression
- Code structure closer to mental models
- Reduced boilerplate
- More intuitive syntax
- Better Development Flow
- Fewer context switches
- Reduced debugging time
- More focused problem-solving
The impact of these changes reminds us that cognitive load improvement isn’t just about big features. Sometimes, it’s the small refinements that make our daily coding experience more productive and enjoyable.
This journey hasn’t been without challenges, however. One persistent struggle has been convincing all team members to embrace these cognitive-friendly features. Developers comfortable with familiar patterns sometimes resist change, even when it would make their code more comprehensible. It’s a reminder that the most significant barriers to better code aren’t always technical; they’re human.
The Future: Project Valhalla and Beyond
Looking ahead, Project Valhalla represents perhaps the most significant attempt yet to align Java’s type system with developers’ mental models. Its features address several cognitive burdens that Java developers have carried for decades:
// Current Java - Mental overhead of reference vs primitive types
Integer wrapperNum = 42;    // Is this on heap or stack?
int primitiveNum = 42;      // Different behavior, same concept
List<Integer> numbers = ... // Boxing/unboxing mental tax

// With Valhalla's primitive objects (proposed syntax, still subject to change)
primitive class Point {
    double x;
    double y;
}

List<Point> points = ... // No mental overhead - works like primitives
Point p1 = new Point(2, 3);
Point p2 = new Point(2, 3);
boolean same = (p1 == p2); // True - just comparing values
- Identity-free Objects
- Current Java forces developers to constantly think about object identity versus value equality
- Valhalla’s primitive classes eliminate this mental overhead – they’re simply values, like integers
- Specialization
- Currently, generic code forces developers to maintain two mental models: one for primitive types and another for reference types
- Valhalla’s specialization unifies these models
- Universal Generics
- Eliminates the cognitive overhead of remembering which types can be used with generics
- No more mental tracking of boxing/unboxing operations
This aligns perfectly with our brain’s preference for consistent, predictable patterns. Just as records eliminated the cognitive overhead of writing data classes, Valhalla will eliminate the cognitive overhead of dealing with Java’s type system fundamentals.
Impact on Development Practice
Consequently, this evolution has profoundly affected how we write and maintain code:
- Code Review
- Less time spent on boilerplate
- Focus on business logic
- Easier to spot potential issues
- Maintenance
- More self-documenting code
- Fewer places for bugs to hide
- Better alignment with business requirements
- Team Dynamics
- Shorter on-boarding time
- More productive code reviews
- Easier knowledge sharing
- Testing
- More focused test cases
- Better expression of test intentions
- Reduced test boilerplate
The key lesson from Java’s evolution is that cognitive-friendly features don’t just make code easier to write—they make it easier to think about, discuss, and maintain. Each advancement has brought the language closer to how developers naturally reason about their problems.
Real-world Impact
Our working memory can only handle 3-5 chunks of information simultaneously. This cognitive limitation directly impacts how we understand and write code. As a result, by applying cognitive-friendly practices, we reduce the number of chunks developers need to hold in memory at once.
Key Benefits
Faster Onboarding Through Chunk Reduction
- New developers need to maintain fewer concepts in working memory
- Each piece of code communicates its intent clearly
- Natural progression from simple to complex concepts
// Before: Multiple chunks (raw types, casting, error handling)
List items = getItems();
for (Object item : items) {
    try {
        DataItem dataItem = (DataItem) item;
        process(dataItem);
    } catch (ClassCastException e) {
        handleError(e);
    }
}

// After: Single conceptual chunk (processing items)
getItems().forEach(this::process);
Improved Understanding Through Structured Information
- Code organized in logical, digestible pieces
- Clear separation of concerns
- Pattern matching aligns with natural thinking
// Before: Multiple mental contexts to track
public Response handleRequest(Request request) {
    Object result = processRequest(request);
    if (result instanceof String) {
        return new SuccessResponse((String) result);
    } else if (result instanceof Error) {
        return new ErrorResponse(((Error) result).getMessage());
    } else if (result instanceof Exception) {
        return new ErrorResponse("Processing failed");
    }
    return new ErrorResponse("Unknown result type");
}

// After: Natural pattern matching
// (assumes processRequest now returns a sealed result type with these records)
public Response handleRequest(Request request) {
    return switch (processRequest(request)) {
        case Success(var message) -> new SuccessResponse(message);
        case Failure(var reason) -> new ErrorResponse(reason);
        case Error(var message) -> new ErrorResponse(message);
    };
}
Reduced Bug Occurrence Through Cognitive Load Reduction
- Fewer mental juggling acts means fewer mistakes
- Compiler can verify more of our intentions
- Clear state transitions and error handling
// Before: Multiple opportunities for bugs
public void processPayment(Payment payment) {
    if (payment.getAmount() > 0) { // Possible NullPointerException
        if (payment.getMethod() != null) {
            if (payment.getMethod().equals("CREDIT")) { // Magic string
                // Process credit payment
            } else if (payment.getMethod().equals("DEBIT")) {
                // Process debit payment
            }
        }
    }
}

// After: Explicit, safe handling
public sealed interface PaymentResult {
    record Success(BigDecimal amount) implements PaymentResult {}
    record Failure(String reason) implements PaymentResult {}
}

public PaymentResult processPayment(Payment payment) {
    return switch (payment.method()) {
        case CREDIT -> processCreditPayment(payment);
        case DEBIT -> processDebitPayment(payment);
    };
}
Business Impact of Reduced Cognitive Load
- Faster Development Cycles
- Less time spent understanding code
- Quicker code reviews
- Reduced debugging time
- Higher Quality Code
- Fewer bugs making it to production
- More consistent implementations
- Better test coverage due to clearer structures
- Improved Team Dynamics
- More confident developers
- Better knowledge sharing
- Reduced friction in code reviews
Ultimately, by consciously managing the number of chunks developers need to hold in memory, we create code that’s not just easier to write—it’s easier to understand, maintain, and evolve. This alignment with our cognitive limitations leads to measurable improvements in development efficiency and code quality.
Navigating the Pitfalls of Modern Java Features
While Java’s modern features can significantly reduce cognitive load, their misuse can actually increase it. Here are key considerations when modernizing your codebase:
Records: Not for Every Class
Records are powerful, but they’re not suitable for all data-holding classes. Watch out for:
// Problematic use of record - business logic doesn't belong here
record User(String name, String email) {
    public String getDisplayName() {
        return name.split(" ")[0]; // Business logic in a record
    }
    public boolean isValidEmail() {
        return email.contains("@"); // Validation logic belongs elsewhere
    }
}

// Better approach - separate data from behavior
record UserData(String name, String email) {}

class UserService {
    public String getDisplayName(UserData user) {
        return user.name().split(" ")[0];
    }
    public boolean isValidEmail(UserData user) {
        return user.email().contains("@");
    }
}
// Or let User be a class that contains both the data and the behavior
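One nuance worth adding: invariant checks are a kind of logic that does belong in a record. A compact constructor can validate components before they are stored, keeping the record a pure data carrier with guaranteed integrity. A hedged sketch (the simple `@`-check is illustrative, not real email validation):

```java
public class RecordValidationDemo {
    record UserData(String name, String email) {
        // Compact constructor: runs before the components are assigned
        UserData {
            if (email == null || !email.contains("@")) {
                throw new IllegalArgumentException("invalid email: " + email);
            }
        }
    }

    public static void main(String[] args) {
        UserData ok = new UserData("Ada", "ada@example.com");
        System.out.println(ok.email());
        // new UserData("Bob", "nope") would throw IllegalArgumentException
    }
}
```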
Pattern Matching: Clarity vs. Complexity
Pattern matching can make code more readable, but overuse can lead to confusion:
// Overly complex pattern matching - trying too hard to use the feature
String result = switch (obj) {
    case String s when s.length() > 10 && s.startsWith("A") -> s.substring(0, 10);
    case String s when s.length() > 5 -> s.substring(0, 5);
    case String s -> s;
    case Integer i when i > 100 -> i.toString();
    case Integer i -> "0" + i;
    default -> "unknown";
};

// Clearer approach - separate complex conditions
String processString(String s) {
    if (s.length() > 10 && s.startsWith("A")) return s.substring(0, 10);
    if (s.length() > 5) return s.substring(0, 5);
    return s;
}

String result = switch (obj) {
    case String s -> processString(s);
    case Integer i -> i > 100 ? i.toString() : "0" + i;
    default -> "unknown";
};
Var: Readability vs. Brevity
Type inference can reduce clutter, but it can also hide important information:
// Problematic use of var - type information is valuable here
var result = service.processData(input);
var transformed = result.transform(); // What are we working with?
// Better - explicit types for public APIs and complex transformations
ProcessResult result = service.processData(input);
TransformedData transformed = result.transform();
Streams: Readability vs. Conciseness
While streams can make code more declarative, complex chain operations can become harder to understand:
// Overly complex stream with multiline lambda
List<Order> results = orders.stream()
    .filter(order -> {
        if (order.getStatus() != Status.COMPLETED) return false;
        if (order.getTotal() < 100) return false;
        return order.getItems().size() > 2;
    })
    .sorted(Comparator.comparing(Order::getCreatedDate))
    .collect(toList());

// Better - extract complex predicates to named methods
List<Order> results = orders.stream()
    .filter(Order::isHighValue) // Intent is clear
    .filter(Order::isCompleted)
    .sorted(Comparator.comparing(Order::getCreatedDate))
    .toList();

// The predicate logic in well-named methods
public class Order {
    // rest of class omitted for brevity
    public boolean isHighValue() {
        return total >= 100 && items.size() > 2;
    }
    public boolean isCompleted() {
        return status == Status.COMPLETED;
    }
}
Remember:
- New features should reduce, not increase, cognitive load
- Clear code is better than clever code
- Not every new feature needs to be used everywhere
- Consider team experience and maintenance implications
- Maintain consistent patterns across your codebase
By being mindful of these pitfalls, you can ensure that your adoption of modern Java features truly serves its purpose: making code more maintainable and easier to understand.
Conclusion
By understanding how the brain processes information, developers can write code that is more intuitive, efficient, and less cognitively demanding for the people who read and maintain it. This approach improves the overall developer experience, increases productivity, and makes codebases more accessible to developers of all experience levels.
Embracing brain-friendly programming will not only benefit individual programmers and users but has the potential to revolutionize the way we interact with technology and shape the future of computing.
The goal isn’t to make code as simple as possible, but to ensure that its complexity serves a purpose. Every line of code should justify its cognitive cost.
We can get help achieving this by upgrading to the latest Java versions and taking advantage of the features they provide.
Is your application running on the latest Java, or at least the latest LTS version? If not, try to find out why, and whether there is a way to upgrade.
Do you know about all the features that your current Java version offers? There are lots of smaller enhancements that don’t get as much publicity as the bigger ones but they can be quite useful.
Immediate Actions
- Audit Your Current Java Version
- Check your application’s current Java version
- List the features you’re missing out on by not being on the latest version or LTS
- Create a migration plan focusing on cognitive-friendly features first
- Start Using Records
- Identify your data classes (classes that are primarily getters/setters)
- Convert one class to a record and measure the reduction in code
- Document the improved readability for your team
- Implement Cognitive Checkpoints
- Add a “cognitive load” section to your code review template
- Questions to ask:
- How many levels of nesting does this code have?
- How many variables need to be tracked at once?
- Could this code be restructured to reduce mental overhead?
Team-Level Changes
- Update Your Style Guide
- Add guidelines for maximum method complexity
- Define standards for when to use newer Java features
- Include examples of before/after refactoring for cognitive load
- Knowledge Sharing
- Schedule regular team sessions to explore new Java features
- Create a “cognitive smells” catalog specific to your codebase
- Share refactoring wins and their impact on maintainability
Long-term Goals
- Measure and Monitor
- Track cognitive metrics in your codebase:
- Cyclomatic complexity
- Number of parameters per method
- Depth of inheritance
- Set improvement targets for these metrics
- Continuous Learning
- Follow JEP (JDK Enhancement Proposal) discussions
- Experiment with preview features in non-production code
- Share knowledge about cognitive impact with the broader Java community
Remember: The goal isn’t to make code as simple as possible, but to ensure that its complexity serves a purpose. Every line of code should justify its cognitive cost.
Take Action Now
Audit Your Current Codebase (Week 1)
- Run java -version across all your applications and services
- Create a spreadsheet documenting:
- Current Java version for each application
- Target upgrade version (latest LTS or latest version)
- Estimated effort
- Potential cognitive benefits (e.g., “Can convert 12 data classes to records”)
- Pick one small service as your pilot upgrade project
Create Your Feature Migration Checklist (Week 1-2)
Map out opportunities to use modern Java features:
// Find candidates like these:
List<String> names = users.stream()
    .map(User::getName)
    .collect(Collectors.toList());

// Convert to:
List<String> names = users.stream()
    .map(User::getName)
    .toList();

// Identify data classes like:
public class UserPreference {
    private final String userId;
    private final Theme theme;
    private final Language language;
    // Constructor, getters, equals, hashCode...
}

// Convert to:
public record UserPreference(String userId, Theme theme, Language language) {}
Set Up a Learning Environment (Week 2)
- Create a sandbox project with the latest Java version
- Add examples of each modern feature you plan to adopt
- Include before/after comparisons for your team
- Document cognitive load improvements for each change
Measure Current Cognitive Load (Week 2-3)
Pick three complex classes in your codebase and:
- Count the number of fields team members need to track
- Measure cyclomatic complexity using tools like SonarQube
- Document the number of implicit assumptions in the code
- Create a “cognitive heat map” highlighting areas needing attention
Implement Changes Incrementally (Week 3-4)
Start with high-impact, low-risk changes:
- Convert simple data classes to records
- Replace Collectors.toList() with toList()
- Enhance error handling with pattern matching
- Update switch statements to use the new syntax
Document and Share Results (Week 4+)
Create a “Modern Java Patterns” guide for your team:
- Before/after examples from your own codebase
- Measured improvements in:
- Lines of code
- Cyclomatic complexity
- Code review time
- Bug density
- Guidelines for when to use each feature
- Common pitfalls to avoid
Create a Feature Adoption Schedule
Week-by-week plan for introducing modern features:
- Week 1: Records and compact constructors
- Week 2: Enhanced switch expressions
- Week 3: Pattern matching for instanceof
- Week 4: Text blocks and String enhancements
- Week 5: Sealed classes and interfaces
Remember to:
- Start small with contained changes
- Measure before and after metrics
- Document team feedback
- Share learnings in code reviews
- Celebrate cognitive load reductions!
By following these steps, you’ll not only modernize your codebase but also create a more maintainable, understandable, and enjoyable development environment. The key is to make incremental improvements while continuously measuring their impact on cognitive load and team productivity.