Agentic AI stands ready to transform customer experience and operational efficiency, necessitating a new strategic approach from leadership. This evolution in artificial intelligence empowers systems to plan, execute, and persist in tasks, moving beyond simple recommendations to proactive action. For UX teams, product managers, and executives, understanding this shift is crucial for unlocking opportunities in innovation, streamlining workflows, and redefining how technology serves people.
It’s easy to confuse Agentic AI with Robotic Process Automation (RPA), a technology that automates rules-based tasks on computers. The distinction lies in rigidity versus reasoning. RPA is excellent at following a strict script: if X happens, do Y. It mimics human hands. Agentic AI mimics human reasoning. It does not follow a linear script; it creates one.
Consider a recruiting workflow. An RPA bot can scan a resume and upload it to a database. It performs a repetitive task perfectly. An Agentic system looks at the resume, notices the candidate lists a specific certification, cross-references that with a new client requirement, and decides to draft a personalized outreach email highlighting that match. RPA executes a predefined plan; Agentic AI formulates the plan based on a goal. This autonomy separates agents from the predictive tools we have used for the last decade.
Another example is managing meeting conflicts. A predictive model integrated into your calendar might analyze your meeting schedule and the schedules of your colleagues. It could then suggest potential conflicts, such as two important meetings scheduled at the same time, or a meeting scheduled when a key participant is on vacation. It provides you with information and flags potential issues, but you are responsible for taking action.
An agentic AI, in the same scenario, would go beyond just suggesting conflicts to avoid. Upon identifying a conflict with a key participant, the agent could act by checking every attendee’s availability, finding alternative time slots, and sending updated invitations.
This agentic AI understands the goal (resolving the meeting conflict), plans the steps (checking availability, finding alternatives, sending invites), executes those steps, and persists until the conflict is resolved, all with minimal direct user intervention. This demonstrates the “agentic” difference: the system takes proactive steps for the user, rather than just providing information to the user.
Agentic AI systems understand a goal, plan a series of steps to achieve it, execute those steps, and even adapt if things go wrong. Think of it like a proactive digital assistant. The underlying technology often combines large language models (LLMs) for understanding and reasoning, with planning algorithms that break down complex tasks into manageable actions. These agents can interact with various tools, APIs, and even other AI models to accomplish their objectives, and critically, they can maintain a persistent state, meaning they remember previous actions and continue working towards a goal over time. This makes them fundamentally different from typical generative AI, which usually completes a single request and then resets.
We can categorize agent behavior into four distinct modes of autonomy. While these often look like a progression, they function as independent operating modes. A user might trust an agent to act autonomously for scheduling, but keep it in “suggestion mode” for financial transactions.
We derived these levels by adapting industry standards for autonomous vehicles (SAE levels) to digital user experience contexts.
The agent functions as a monitor. It analyzes data streams and flags anomalies or opportunities, but takes zero action.
Differentiation
Unlike the next level, the agent generates no complex plan. It points to a problem.
Example
A DevOps agent notices a server CPU spike and alerts the on-call engineer. It does not know how to fix the problem, nor does it attempt to, but it knows something is wrong.
Implications for design and oversight
At this level, design and oversight should prioritize clear, non-intrusive notifications and a well-defined process for users to act on suggestions. The focus is on empowering the user with timely and relevant information without taking control. UX practitioners should focus on making suggestions clear and easy to understand, while product managers need to ensure the system provides value without overwhelming the user.
The agent identifies a goal and generates a multi-step strategy to achieve it. It presents the full plan for human review.
Differentiation
The agent acts as a strategist. It does not execute; it waits for approval on the entire approach.
Example
The same DevOps agent notices the CPU spike, analyzes the logs, and proposes a step-by-step remediation plan for review.
The human reviews the logic and clicks “Approve Plan”.
Implications for design and oversight
For agents that plan and propose, design must ensure the proposed plans are easily understandable and that users have intuitive ways to modify or reject them. Oversight is crucial in monitoring the quality of proposals and the agent’s planning logic. UX practitioners should design clear visualizations of the proposed plans, and product managers must establish clear review and approval workflows.
The agent completes all preparation work and places the final action in a staged state. It effectively holds the door open, waiting for a nod.
Differentiation
This differs from “Plan-and-Propose” because the work is already done and staged. It reduces friction. The user confirms the outcome, not the strategy.
Example
A recruiting agent drafts five interview invitations, finds open times on calendars, and creates the calendar events. It presents a “Send All” button. The user provides the final authorization to trigger the external action.
Implications for design and oversight
When agents act with confirmation, the design should provide transparent and concise summaries of the intended action, clearly outlining potential consequences. Oversight needs to verify that the confirmation process is robust and that users are not being asked to blindly approve actions. UX practitioners should design confirmation prompts that are clear and provide all necessary information, and product managers should prioritize a robust audit trail for all confirmed actions.
The agent executes tasks independently within defined boundaries.
Differentiation
The user reviews the history of actions, not the actions themselves.
Example
The recruiting agent sees a conflict, moves the interview to a backup slot, updates the candidate, and notifies the hiring manager. The human only sees a notification: Interview rescheduled to Tuesday.
Implications for design and oversight
For autonomous agents, the design needs to establish clear pre-approved boundaries and provide robust monitoring tools. Oversight requires continuous evaluation of the agent’s performance within those boundaries, along with robust logging, clear override mechanisms, and user-defined kill switches to maintain user control and trust. UX practitioners should focus on designing effective dashboards for monitoring autonomous agent behavior, and product managers must ensure clear governance and ethical guidelines are in place.
Let’s look at a real-world application in HR technology to see these modes in action. Consider an “Interview Coordination Agent” designed to handle the logistics of hiring.
Developing effective agentic AI demands a distinct research approach compared to traditional software or even generative AI. The autonomous nature of AI agents, their ability to make decisions, and their potential for proactive action necessitate specialized methodologies for understanding user expectations, mapping complex agent behaviors, and anticipating potential failures. The following research primer outlines key methods to measure and evaluate these unique aspects of agentic AI.
These interviews uncover users’ preconceived notions about how an AI agent should behave. Instead of simply asking what users want, the focus is on understanding their internal models of the agent’s capabilities and limitations. We should avoid using the word “agent” with participants. It carries sci-fi baggage or is a term too easily confused with a human agent offering support or services. Instead, frame the discussion around “assistants” or “the system.”
We need to uncover where users draw the line between helpful automation and intrusive control.
Similar to traditional user journey mapping, agent journey mapping specifically focuses on the anticipated actions and decision points of the AI agent itself, alongside the user’s interaction. This helps to proactively identify potential pitfalls.
This approach is designed to stress-test the system and observe user reactions when the AI agent fails or deviates from expectations. It’s about understanding trust repair and emotional responses in adverse situations.
By integrating these research methodologies, UX practitioners can move beyond simply making agentic systems usable to making them trusted, controllable, and accountable, fostering a positive and productive relationship between users and their AI agents. Note that these aren’t the only methods relevant to exploring agentic AI effectively. Many other methods exist, but these are most accessible to practitioners in the near term. I’ve previously covered the Wizard of Oz method, a slightly more advanced method of concept testing, which is also a valuable tool for exploring agentic AI concepts.
When researching agentic AI, particularly when simulating misbehavior or errors, ethical considerations are paramount. Many publications focus on ethical UX research, including an article I wrote for Smashing Magazine, guidelines from the UX Design Institute, and a page from the Inclusive Design Toolkit.
You’ll need a comprehensive set of key metrics to effectively assess the performance and reliability of agentic AI systems. These metrics provide insights into user trust, system accuracy, and the overall user experience. By tracking these indicators, developers and designers can identify areas for improvement and ensure that AI agents operate safely and efficiently.
1. Intervention Rate
For autonomous agents, we measure success by silence. If an agent executes a task and the user does not intervene or reverse the action within a set window (e.g., 24 hours), we count that as acceptance. We track the Intervention Rate: how often does a human jump in to stop or correct the agent? A high intervention rate signals a misalignment in trust or logic.
2. Frequency of Unintended Actions per 1,000 Tasks
This critical metric quantifies the number of actions performed by the AI agent that were not desired or expected by the user, normalized per 1,000 completed tasks. A low frequency of unintended actions signifies a well-aligned AI that accurately interprets user intent and operates within defined boundaries. This metric is closely tied to the AI’s understanding of context, its ability to disambiguate commands, and the robustness of its safety protocols.
3. Rollback or Undo Rates
This metric tracks how often users need to reverse or undo an action performed by the AI. High rollback rates suggest that the AI is making frequent errors, misinterpreting instructions, or acting in ways that are not aligned with user expectations. Analyzing the reasons behind these rollbacks can provide valuable feedback for improving the AI’s algorithms, understanding of user preferences, and its ability to predict desirable outcomes.
To understand why, you must implement a microsurvey on the undo action. For example, when a user reverses a scheduling change, a simple prompt can ask: “Wrong time? Wrong person? Or did you just want to do it yourself?”, letting the user click the option that best matches their reasoning.
4. Time to Resolution After an Error
This metric measures the duration it takes for a user to correct an error made by the AI or for the AI system itself to recover from an erroneous state. A short time to resolution indicates an efficient and user-friendly error recovery process, which can mitigate user frustration and maintain productivity. This includes the ease of identifying the error, the accessibility of undo or correction mechanisms, and the clarity of error messages provided by the AI.
Collecting these metrics requires instrumenting your system to track Agent Action IDs. Every distinct action the agent takes, such as proposing a schedule or booking a flight, must generate a unique ID that persists in the logs. To measure the Intervention Rate, we do not look for an immediate user reaction. We look for the absence of a counter-action within a defined window. If an Action ID is generated at 9:00 AM and no human user modifies or reverts that specific ID by 9:00 AM the next day, the system logically tags it as Accepted. This allows us to quantify success based on user silence rather than active confirmation.
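As a rough sketch of how this acceptance window might be computed, consider the following. The function names, data shapes, and 24-hour window here are illustrative assumptions, not a prescribed API:

```javascript
// Illustrative sketch: tag agent actions as "accepted" when no human
// counter-action arrives within a 24-hour window. All names are hypothetical.
const DAY_MS = 24 * 60 * 60 * 1000;

function classifyActions(actions, intervenedIds, now = Date.now()) {
  // intervenedIds: a Set of Action IDs a human modified or reverted
  return actions.map((action) => {
    if (intervenedIds.has(action.id)) {
      return { ...action, status: 'intervened' };
    }
    const windowClosed = now - action.createdAt >= DAY_MS;
    return { ...action, status: windowClosed ? 'accepted' : 'pending' };
  });
}

function interventionRate(classified) {
  // Only count actions whose window has settled one way or the other.
  const settled = classified.filter((a) => a.status !== 'pending');
  if (settled.length === 0) return 0;
  return settled.filter((a) => a.status === 'intervened').length / settled.length;
}
```

Note that recent actions stay `pending` rather than counting as accepted, so an agent can’t look better simply by acting moments before the report runs.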
For Rollback Rates, raw counts are insufficient because they lack context. To capture the underlying reason, you must implement intercept logic on your application’s Undo or Revert functions. When a user reverses an agent-initiated action, trigger a lightweight microsurvey. This can be a simple three-option modal asking the user to categorize the error as factually incorrect, lacking context, or a simple preference to handle the task manually. This combines quantitative telemetry with qualitative insight. It enables engineering teams to distinguish between a broken algorithm and a user preference mismatch.
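A minimal sketch of that intercept logic, assuming hypothetical names for the undo handler and the survey prompt:

```javascript
// Hypothetical sketch: intercept an undo of an agent-initiated action and
// record the user's categorized reason before performing the actual revert.
const REASONS = ['factually_incorrect', 'lacking_context', 'prefer_manual'];

function interceptUndo(action, askUserForReason, performUndo, telemetryLog) {
  // askUserForReason stands in for a three-option modal in the real UI.
  const reason = askUserForReason(REASONS);
  telemetryLog.push({ actionId: action.id, event: 'rollback', reason });
  performUndo(action);
  return reason;
}
```

Pairing each rollback with a reason in the telemetry log is what lets engineering distinguish a broken algorithm from a simple preference mismatch.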
These metrics, when tracked consistently and analyzed holistically, provide a robust framework for evaluating the performance of agentic AI systems, allowing for continuous improvement in control, consent, and accountability.
As agents become increasingly capable, we face a new risk: Agentic Sludge. Traditional sludge creates friction that makes it hard to cancel a subscription or delete an account. Agentic sludge acts in reverse. It removes friction to a fault, making it too easy for a user to agree to an action that benefits the business rather than their own interests.
Consider an agent assisting with travel booking. Without clear guardrails, the system might prioritize a partner airline or a higher-margin hotel. It presents this choice as the optimal path. The user, trusting the system’s authority, accepts the recommendation without scrutiny. This creates a deceptive pattern where the system optimizes for revenue under the guise of convenience.
Deception may not stem from malicious intent. It often manifests in AI as Imagined Competence. Large Language Models frequently sound authoritative even when incorrect. They present a false booking confirmation or an inaccurate summary with the same confidence as a verified fact. Users may naturally trust this confident tone. This mismatch creates a dangerous gap between system capability and user expectations.
We must design specifically to bridge this gap. If an agent fails to complete a task, the interface must signal that failure clearly. If the system is unsure, it must express uncertainty rather than masking it with polished prose.
The antidote to both sludge and hallucination is provenance. Every autonomous action requires a specific metadata tag explaining the origin of the decision. Users need the ability to inspect the logic chain behind the result.
To achieve this, we must translate primitives into practical answers. In software engineering, primitives refer to the core units of information or actions an agent performs. To the engineer, this looks like an API call or a logic gate. To the user, it must appear as a clear explanation.
The design challenge lies in mapping these technical steps to human-readable rationales. If an agent recommends a specific flight, the user needs to know why. The interface cannot hide behind a generic suggestion. It must expose the underlying primitive: Logic: Cheapest_Direct_Flight or Logic: Partner_Airline_Priority.
Figure 4 illustrates this translation flow. We take the raw system primitive, the actual code logic, and map it to a user-facing string. For instance, a primitive that checks a calendar to schedule a meeting becomes a clear statement: “I’ve proposed a 4 PM meeting.”
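That mapping can start as simply as a lookup table from primitive names to rationale strings. A sketch, using the primitive names mentioned above (the exact phrasing and the fallback behavior are assumptions):

```javascript
// Illustrative mapping from system primitives to user-facing rationales.
const RATIONALES = {
  Cheapest_Direct_Flight: "I chose this flight because it is the cheapest direct option.",
  Partner_Airline_Priority: "I prioritized a partner airline for this booking.",
  Calendar_Availability_Check: "I've proposed a 4 PM meeting because everyone is free then.",
};

function explain(primitive) {
  // Fall back to exposing the raw primitive rather than hiding the logic:
  // a visible "glass box" even for decisions we haven't yet translated.
  return RATIONALES[primitive] ?? `Decision based on: ${primitive}`;
}
```

The fallback branch matters: an untranslated primitive should still surface its origin instead of collapsing into a generic suggestion.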
This level of transparency ensures the agent’s actions appear logical and beneficial. It allows the user to verify that the agent acted in their best interest. By exposing the primitives, we transform a black box into a glass box, ensuring users remain the final authority on their own digital lives.
Building an agentic system requires a new level of psychological and behavioral understanding. It forces us to move beyond conventional usability testing and into the realm of trust, consent, and accountability. The research methods we’ve discussed, from probing mental models to simulating misbehavior and establishing new metrics, provide a necessary foundation. These practices are the essential tools for proactively identifying where an autonomous system might fail and, more importantly, how to repair the user-agent relationship when it does.
The shift to agentic AI is a redefinition of the user-system relationship. We are no longer designing for tools that simply respond to commands; we are designing for partners that act on our behalf. This changes the design imperative from efficiency and ease of use to transparency, predictability, and control.
This new reality also elevates the role of the UX researcher. We become the custodians of user trust, working collaboratively with engineers and product managers to define and test the guardrails of an agent’s autonomy. Beyond being researchers, we become advocates for user control, transparency, and the ethical safeguards within the development process. By translating primitives into practical questions and simulating worst-case scenarios, we can build robust systems that are both powerful and safe.
This article has outlined the “what” and “why” of researching agentic AI. It has shown that our traditional toolkits are insufficient and that we must adopt new, forward-looking methodologies. The next article will build upon this foundation, providing the specific design patterns and organizational practices that make an agent’s utility transparent to users, ensuring they can harness the power of agentic AI with confidence and control. The future of UX is about making systems trustworthy.
Graceful degradation is a design approach that ensures the basics of a website will still function even if specific individual parts of it stop working. The approach removes single points of failure: just because one thing stops working doesn’t mean the system as a whole fails. A site following this principle fails in pieces instead of all at once, so the most important features remain available when some components encounter an error.
The concept of single points of failure is well known in the manufacturing sector, where eliminating them is one of the most common resilience strategies in manufacturing and supply chain operations. A factory with multiple sources of material can keep working even when one supplier becomes unavailable. The same principle has become increasingly crucial to web development as user expectations around availability and functionality rise.
Data center redundancy is a common example of graceful degradation in web development. By using multiple server components, websites ensure they’ll stay up when one or more servers fail. In a design context, it may look like ensuring that a browser’s or device’s lack of support for a given feature doesn’t render an app unusable.
Escalators are a familiar real-world example of the same concept. When they stop working, they can still get people from one floor to the next by acting as stairs. They may not be as functional as they normally are, but they’re not entirely useless.
The BBC News webpage is a good example of graceful degradation in web design. As this screenshot shows, the site prioritizes loading navigation and the text within a news story over images. Consequently, slow speeds or old, incompatible browser plugins may make pictures unavailable, but the site’s core function — sharing the news — is still accessible.
In contrast, the Adobe Express website is an example of what happens without graceful degradation. Instead of making some features unavailable or dropping load times, the entire site is inaccessible on some browsers. Consequently, users have to update or switch software to use the web app, which isn’t great for accessibility.
The graceful degradation approach acts as the opposite of progressive enhancement — an approach in which a designer builds the basics of a website and progressively adds features that are turned on only if a browser is capable of running them. Each layer of features is turned off by default, allowing for one seamless user experience designed to work for everyone.
There is much debate between designers about whether graceful degradation or progressive enhancement is the best way to build site functionality. In reality, though, both are important. Each method has unique pros and cons, so the two can complement each other to provide the most resilience.
Progressive enhancement is a good strategy when building a new site or app because you ensure a functional experience for everyone from the start. However, new standards and issues can emerge in the future, which is where graceful degradation comes in. This approach helps you adjust an existing website to comply with new accessibility standards or resolve a compatibility problem you didn’t notice earlier.
Ensuring your site or app remains functional is crucial for accessibility. When core functions become unavailable, the platform is no longer accessible to anyone. On a smaller scale, if features like text-to-speech readers or video closed captioning stop working, users with sight or hearing difficulties may be unable to use the site.
Graceful degradation’s impact on accessibility is even larger when considering varying device capabilities. As the average person spends 3.6 hours each day on their phone, failing to support less powerful mobile browsers will alienate a considerable chunk of your audience. Even if some complex functions may not work on mobile, sacrificing those to keep the bulk of the website available on phones ensures broader accessibility.
Outdated browsers are another common accessibility issue you can address with graceful degradation. Consider this example from Fairleigh Dickinson University about Adobe Flash, which most modern browsers no longer support.
Software still using Flash cannot use the multi-factor authentication feature in question. As a result, users with older programs can’t log in. A graceful degradation compromise might make some functionality unavailable to browsers that still rely on Flash while preserving general access. That way, people don’t need to upgrade to use the service.
Graceful degradation removes technological barriers to accessibility. In a broader sense, it also keeps your site or app running at all times, even amid unforeseen technical difficulties. While there are many ways you can achieve that, here are some general best practices to follow.
The first step in ensuring graceful degradation is determining what your core functions are. You can only guarantee the availability of mission-critical features once you know what’s essential and what isn’t.
Review your user data to see what your audience interacts with most — these are generally elements worth prioritizing. Anything related to site security, transactions, and readability is also crucial. Infrequently used features or elements like video players and interactive maps are nice to have but okay to sacrifice if you must to ensure mission-critical components remain available.
Once you’ve categorized site functions by criticality, you can ensure redundancy for the most important ones. That may mean replicating elements in a few forms to work on varying browsers or devices. Alternatively, you could provide multiple services to carry out important functions, like supporting alternate payment methods or providing both video and text versions of content.
Remember that redundancy applies to the hardware your platform runs on, too. The Uptime Institute classifies data centers into tiers, which you can use to determine what redundant systems you need. Similarly, make sure you can run your site on multiple servers to avoid a crash should one go down.
Remember that graceful degradation is also about supporting software and hardware of varying capabilities. One of the most important considerations under that umbrella for web design is to accommodate outdated browsers.
While mobile devices don’t support Flash, some older versions of desktop browsers still use it. You can accommodate both by avoiding Flash yourself (HTML5 can often replace it) without requiring users to switch browsers. Similarly, you can offer low-bandwidth, simple alternatives to any features that take up considerable processing power to keep things accessible on older systems.
Remember to pay attention to newer software’s security settings, too. Error messages like one a Microsoft user reported can appear if a site does not support some browsers’ updated security protocols. Always keep up with updates from popular platforms like Chrome and Safari to meet these standards and avoid such access issues.
Load balancing is another crucial step in graceful degradation. Many cloud services automatically distribute traffic between server resources to prevent overloading. Enabling this also ensures that requests can be processed on a different part of the system if another fails.
Caching is similar. By storing critical data, you build a fallback plan for when an external service or application programming interface (API) doesn’t work. When the API doesn’t respond, you can load the cached data instead. Caches also significantly reduce latency in many cases, but you can’t cache everything, so focus on the most critical functions.
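A minimal sketch of this cache-then-fallback pattern. The in-memory Map stands in for whatever cache layer you actually use, and the function name is illustrative:

```javascript
// Hypothetical sketch: fall back to cached data when an API call fails.
const cache = new Map();

async function fetchWithFallback(url) {
  try {
    const res = await fetch(url);
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const data = await res.json();
    cache.set(url, data); // refresh the cache on every successful call
    return data;
  } catch (err) {
    if (cache.has(url)) return cache.get(url); // degrade gracefully
    throw err; // nothing cached: surface the error to the caller
  }
}
```

The key design choice is that a network failure only becomes a user-visible error when there is no stored copy to serve, so a flaky dependency degrades the freshness of the data rather than the availability of the feature.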
Finally, be sure to test your website for accessibility issues before taking it live. Access it from multiple devices, including various browser versions. See if you can run it on a single server to test its ability to balance loads.
You likely won’t discover all possible errors in testing, but it’s better to catch some than none. Remember to test your site’s functionality before any updates or redesigns, too.
Design teams, both big and small, can start their graceful degradation journey by tweaking some settings with their web hosting service. AWS offers guidance for managing failures that you can use to build degradation into your site’s architecture. Hosting providers should also allow you to upgrade your storage plan and configure your server settings to provide redundancy and balance loads.
Businesses large enough to run their own data centers should install redundant server capacity and uninterruptible power supplies to keep things running. Smaller organizations can instead rely on their code, using semantic HTML to keep it simple enough for multiple browsers. Programming nonessential elements like images and videos to stop loading when bandwidth is low will also help.
Virtualization systems like Kubernetes are also useful as a way to scale site capacity and help load elements separately from one another to maintain accessibility. Testing tools like BrowserStack, WAVE, and CSS HTML Validator can assist you by revealing if your site has functional issues on some browsers or for certain users.
At its core, web accessibility is about ensuring a platform works as intended for all people. While design features may be the most obvious part of that goal, technical defenses also play a role. A site is only accessible when it works, so you must keep it functional, even when unexpected hiccups occur.
Graceful degradation is not a perfect solution, but it prevents a small issue from becoming a larger one. Following these five steps to implement it on your website or app will ensure that your work in creating an accessible design doesn’t go to waste.
Modern JavaScript regular expressions have come a long way compared to what you might be familiar with. Regexes can be an amazing tool for searching and replacing text, but they have a longstanding reputation (perhaps outdated, as I’ll show) for being difficult to write and understand.
This is especially true in JavaScript-land, where regexes languished for many years, underpowered compared to their more modern counterparts in PCRE, Perl, .NET, Java, Ruby, C++, and Python. Those days are over.
In this article, I’ll recount the history of improvements to JavaScript regexes (spoiler: ES2018 and ES2024 changed the game), show examples of modern regex features in action, introduce you to a lightweight JavaScript library that makes JavaScript stand alongside or surpass other modern regex flavors, and end with a preview of active proposals that will continue to improve regexes in future versions of JavaScript (with some of them already working in your browser today).
ECMAScript 3, standardized in 1999, introduced Perl-inspired regular expressions to the JavaScript language. Although it got enough things right to make regexes pretty useful (and mostly compatible with other Perl-inspired flavors), there were some big omissions, even then. And while JavaScript waited 10 years for its next standardized version with ES5, other programming languages and regex implementations added useful new features that made their regexes more powerful and readable.
But that was then.
Did you know that nearly every new version of JavaScript has made at least minor improvements to regular expressions?
Let’s take a look at them.
Don’t worry if it’s hard to understand what some of the following features mean — we’ll look more closely at several of the key features afterward.
- ES6/ES2015 added flag y (sticky), which made it easier to use regexes in parsers, and flag u (unicode), which added several significant Unicode-related improvements along with strict errors. It also added the RegExp.prototype.flags getter, support for subclassing RegExp, and the ability to copy a regex while changing its flags.
- ES2018 added the s (dotAll) flag, lookbehind, named capture, and Unicode properties (via \p{...} and \P{...}, which require ES6’s flag u). All of these are extremely useful features, as we’ll see.
- ES2020 added the string method matchAll, which we’ll also see more of shortly.
- ES2022 added flag d (hasIndices), which provides start and end indices for matched substrings.
- ES2024 added flag v (unicodeSets) as an upgrade to ES6’s flag u. The v flag adds a set of multicharacter “properties of strings” to \p{...}, multicharacter elements within character classes via \p{...} and \q{...}, nested character classes, set subtraction [A--B] and intersection [A&&B], and different escaping rules within character classes. It also fixed case-insensitive matching for Unicode properties within negated sets [^...].

As for whether you can safely use these features in your code today, the answer is yes! The latest of these features, flag v, is supported in Node.js 20 and 2023-era browsers. The rest are supported in 2021-era browsers or earlier.
Each edition from ES2019 to ES2023 also added additional Unicode properties that can be used via \p{...} and \P{...}. And to be a completionist, ES2021 added string method replaceAll — although, when given a regex, the only difference from ES3’s replace is that it throws if not using flag g.
With all of these changes, how do JavaScript regular expressions now stack up against other flavors? There are multiple ways to think about this, but here are a few key aspects:
On the plus side, JavaScript’s Unicode handling (via flags u and v) and its unrestricted lookbehind are now among the best of any flavor. On the minus side, JavaScript lacks the x (“extended”) flag that allows insignificant whitespace and comments. Additionally, it lacks regex subroutines and subroutine definition groups (from PCRE and Perl), a powerful set of features that enable writing grammatical regexes that build up complex patterns via composition.

So, it’s a bit of a mixed bag.
The good news is that all of these holes can be filled by a JavaScript library, which we’ll see later in this article.
Let’s look at a few of the more useful modern regex features that you might be less familiar with. You should know in advance that this is a moderately advanced guide. If you’re relatively new to regex, here are some excellent tutorials you might want to start with:
Often, you want to do more than just check whether a regex matches — you want to extract substrings from the match and do something with them in your code. Named capturing groups allow you to do this in a way that makes your regexes and code more readable and self-documenting.
The following example matches a record with two date fields and captures the values:
const record = 'Admitted: 2024-01-01\nReleased: 2024-01-03';
const re = /^Admitted: (?<admitted>\d{4}-\d{2}-\d{2})\nReleased: (?<released>\d{4}-\d{2}-\d{2})$/;
const match = record.match(re);
console.log(match.groups);
/* → {
  admitted: '2024-01-01',
  released: '2024-01-03'
} */
Don’t worry — although this regex might be challenging to understand, later, we’ll look at a way to make it much more readable. The key things here are that named capturing groups use the syntax (?<name>...), and their results are stored on the groups object of matches.
You can also use named backreferences to rematch whatever a named capturing group matched via \k<name>, and you can use the values within search and replace as follows:
// Change 'FirstName LastName' to 'LastName, FirstName'
const name = 'Shaquille Oatmeal';
name.replace(/(?<first>\w+) (?<last>\w+)/, '$<last>, $<first>');
// → 'Oatmeal, Shaquille'
For advanced regexers who want to use named backreferences within a replacement callback function, the groups object is provided as the last argument. Here’s a fancy example:
function fahrenheitToCelsius(str) {
  const re = /(?<degrees>-?\d+(\.\d+)?)F\b/g;
  return str.replace(re, (...args) => {
    const groups = args.at(-1);
    return Math.round((groups.degrees - 32) * 5 / 9) + 'C';
  });
}

fahrenheitToCelsius('98.6F');
// → '37C'
fahrenheitToCelsius('May 9 high is 40F and low is 21F');
// → 'May 9 high is 4C and low is -6C'
Lookbehind (introduced in ES2018) is the complement to lookahead, which has always been supported by JavaScript regexes. Lookahead and lookbehind are assertions (similar to ^ for the start of a string or \b for word boundaries) that don’t consume any characters as part of the match. Lookbehinds succeed or fail based on whether their subpattern can be found immediately before the current match position.
For example, the following regex uses a lookbehind (?<=...) to match the word “cat” (only the word “cat”) if it’s preceded by “fat ”:
const re = /(?<=fat )cat/g;
'cat, fat cat, brat cat'.replace(re, 'pigeon');
// → 'cat, fat pigeon, brat cat'
You can also use negative lookbehind — written as (?<!...) — to invert the assertion. That would make the regex match any instance of “cat” that’s not preceded by “fat ”.
const re = /(?<!fat )cat/g;
'cat, fat cat, brat cat'.replace(re, 'pigeon');
// → 'pigeon, fat cat, brat pigeon'
JavaScript’s implementation of lookbehind is one of the very best (matched only by .NET). Whereas other regex flavors have inconsistent and complex rules for when and whether they allow variable-length patterns inside lookbehind, JavaScript allows you to look behind for any subpattern.
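To make that concrete, here’s a quick sketch of my own that relies on a variable-length lookbehind — the alternatives inside the lookbehind have different lengths, which many other flavors would reject:

```javascript
// Match a number only when preceded by "USD " (four chars) or "$" (one char).
// Variable-length lookbehind like this is fine in JavaScript.
const priceRe = /(?<=USD |\$)\d+(\.\d+)?/g;

console.log('USD 42 vs $3.50 vs 7'.match(priceRe)); // → ['42', '3.50']
```

The plain "7" is skipped because neither lookbehind alternative appears before it.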
The matchAll Method

JavaScript’s String.prototype.matchAll was added in ES2020 and makes it easier to operate on regex matches in a loop when you need extended match details. Although other solutions were possible before, matchAll is often easier, and it avoids gotchas, such as the need to guard against infinite loops when looping over the results of regexes that might return zero-length matches.
Since matchAll returns an iterator (rather than an array), it’s easy to use it in a for...of loop.
const re = /(?<char1>\w)(?<char2>\w)/g;
for (const match of str.matchAll(re)) {
  const {char1, char2} = match.groups;
  // Print each complete match and matched subpatterns
  console.log(`Matched "${match[0]}" with "${char1}" and "${char2}"`);
}
Note: matchAll requires its regexes to use flag g (global). Also, as with other iterators, you can get all of its results as an array using Array.from or array spreading.
const matches = [...str.matchAll(/./g)];
Unicode properties (added in ES2018) give you powerful control over multilingual text, using the syntax \p{...} and its negated version \P{...}. There are hundreds of different properties you can match, which cover a wide variety of Unicode categories, scripts, script extensions, and binary properties.
Note: For more details, check out the documentation on MDN.
Unicode properties require using the flag u (unicode) or v (unicodeSets).
Flag v

Flag v (unicodeSets) was added in ES2024 and is an upgrade to flag u — you can’t use both at the same time. It’s a best practice to always use one of these flags to avoid silently introducing bugs via the default Unicode-unaware mode. The decision on which to use is fairly straightforward. If you’re okay with only supporting environments with flag v (Node.js 20 and 2023-era browsers), then use flag v; otherwise, use flag u.
Flag v adds support for several new regex features, with the coolest probably being set subtraction and intersection. This allows using A--B (within character classes) to match strings in A but not in B or using A&&B to match strings in both A and B. For example:
// Matches all Greek symbols except the letter 'π'
/[\p{Script_Extensions=Greek}--π]/v

// Matches only Greek letters
/[\p{Script_Extensions=Greek}&&\p{Letter}]/v
For more details about flag v, including its other new features, check out this explainer from the Google Chrome team.
Emoji are 🤩🔥😎👌, but how emoji get encoded in text is complicated. If you’re trying to match them with a regex, it’s important to be aware that a single emoji can be composed of one or many individual Unicode code points. Many people (and libraries!) who roll their own emoji regexes miss this point (or implement it poorly) and end up with bugs.
The following details for the emoji “👩🏻🏫” (Woman Teacher: Light Skin Tone) show just how complicated emoji can be:
// Code unit length
'👩🏻🏫'.length;
// → 7
// Each astral code point (above \uFFFF) is divided into high and low surrogates

// Code point length
[...'👩🏻🏫'].length;
// → 4
// These four code points are: \u{1F469} \u{1F3FB} \u{200D} \u{1F3EB}
// \u{1F469} combined with \u{1F3FB} is '👩🏻'
// \u{200D} is a Zero-Width Joiner
// \u{1F3EB} is '🏫'

// Grapheme cluster length (user-perceived characters)
[...new Intl.Segmenter().segment('👩🏻🏫')].length;
// → 1
Fortunately, JavaScript added an easy way to match any individual, complete emoji via \p{RGI_Emoji}. Since this is a fancy “property of strings” that can match more than one code point at a time, it requires ES2024’s flag v.
If you want to match emojis in environments without v support, check out the excellent libraries emoji-regex and emoji-regex-xs.
Despite the improvements to regex features over the years, native JavaScript regexes of sufficient complexity can still be outrageously hard to read and maintain.
Regular Expressions are SO EASY!!!! pic.twitter.com/q4GSpbJRbZ
— Garabato Kid (@garabatokid) July 5, 2019
ES2018’s named capture was a great addition that made regexes more self-documenting, and ES6’s String.raw tag allows you to avoid escaping all your backslashes when using the RegExp constructor. But for the most part, that’s it in terms of readability.
However, there’s a lightweight and high-performance JavaScript library named regex (by yours truly) that makes regexes dramatically more readable. It does this by adding key missing features from Perl-Compatible Regular Expressions (PCRE) and outputting native JavaScript regexes. You can also use it as a Babel plugin, which means that regex calls are transpiled at build time, so you get a better developer experience without users paying any runtime cost.
PCRE is a popular C library used by PHP for its regex support, and it’s available in countless other programming languages and tools.
Let’s briefly look at some of the ways the regex library, which provides a template tag named regex, can help you write complex regexes that are actually understandable and maintainable by mortals. Note that all of the new syntax described below works identically in PCRE.
By default, regex allows you to freely add whitespace and line comments (starting with #) to your regexes for readability.
import {regex} from 'regex';

const date = regex`
  # Match a date in YYYY-MM-DD format
  (?<year>  \d{4}) - # Year part
  (?<month> \d{2}) - # Month part
  (?<day>   \d{2})   # Day part
`;
This is equivalent to using PCRE’s xx flag.
Subroutines are written as \g<name> (where name refers to a named group), and they treat the referenced group as an independent subpattern that they try to match at the current position. This enables subpattern composition and reuse, which improves readability and maintainability.
For example, the following regex matches an IPv4 address such as “192.168.12.123”:
import {regex} from 'regex';

const ipv4 = regex`\b
  (?<byte> 25[0-5] | 2[0-4]\d | 1\d\d | [1-9]?\d)
  # Match the remaining 3 dot-separated bytes
  (\. \g<byte>){3}
\b`;
You can take this even further by defining subpatterns for use by reference only via subroutine definition groups. Here’s an example that improves the regex for admittance records that we saw earlier in this article:
const record = 'Admitted: 2024-01-01\nReleased: 2024-01-03';
const re = regex`
  ^ Admitted:\ (?<admitted> \g<date>) \n
    Released:\ (?<released> \g<date>) $

  (?(DEFINE)
    (?<date>  \g<year>-\g<month>-\g<day>)
    (?<year>  \d{4})
    (?<month> \d{2})
    (?<day>   \d{2})
  )
`;
const match = record.match(re);
console.log(match.groups);
/* → {
  admitted: '2024-01-01',
  released: '2024-01-03'
} */
regex includes the v flag by default, so you never forget to turn it on. And in environments without native v, it automatically switches to flag u while applying v’s escaping rules, so your regexes are forward and backward-compatible.
It also implicitly enables the emulated flags x (insignificant whitespace and comments) and n (“named capture only” mode) by default, so you don’t have to continually opt into their superior modes. And since it’s a raw string template tag, you don’t have to escape your backslashes \\\\ like with the RegExp constructor.
Atomic groups and possessive quantifiers are another powerful set of features added by the regex library. Although they’re primarily about performance and resilience against catastrophic backtracking (also known as ReDoS or “regular expression denial of service,” a serious issue where certain regexes can take forever when searching particular, not-quite-matching strings), they can also help with readability by allowing you to write simpler patterns.
Note: You can learn more in the regex documentation.
There are a variety of active proposals for improving regexes in JavaScript. Below, we’ll look at the three that are well on their way to being included in future editions of the language.
Duplicate Named Groups

This is a Stage 3 (nearly finalized) proposal. Even better, as of recently, it works in all major browsers.
When named capturing was first introduced, it required that all (?<name>...) captures use unique names. However, there are cases when you have multiple alternate paths through a regex, and it would simplify your code to reuse the same group names in each alternative.
For example:
/(?<year>\d{4})-\d\d|\d\d-(?<year>\d{4})/
This proposal enables exactly this, preventing a “duplicate capture group name” error with this example. Note that names must still be unique within each alternative path.
Pattern Modifiers

This is another Stage 3 proposal. It’s already supported in Chrome/Edge 125 and Opera 111, and it’s coming soon in Firefox. No word yet on Safari.
Pattern modifiers use (?ims:...), (?-ims:...), or (?im-s:...) to turn the flags i, m, and s on or off for only certain parts of a regex.
For example:
/hello-(?i:world)/ // Matches 'hello-WORLD' but not 'HELLO-WORLD'
RegExp.escape

This proposal recently reached Stage 3 and has been a long time coming. It isn’t yet supported in any major browsers. The proposal does what it says on the tin, providing the function RegExp.escape(str), which returns the string with all regex special characters escaped so you can match them literally.
If you need this functionality today, the most widely-used package (with more than 500 million monthly npm downloads) is escape-string-regexp, an ultra-lightweight, single-purpose utility that does minimal escaping. That’s great for most cases, but if you need assurance that your escaped string can safely be used at any arbitrary position within a regex, escape-string-regexp recommends the regex library that we’ve already looked at in this article. The regex library uses interpolation to escape embedded strings in a context-aware way.
So there you have it: the past, present, and future of JavaScript regular expressions.
If you want to journey even deeper into the lands of regex, check out Awesome Regex for a list of the best regex testers, tutorials, libraries, and other resources. And for a fun regex crossword puzzle, try your hand at regexle.
May your parsing be prosperous and your regexes be readable.
Nature lovers may be able to find a soft spot in their hearts (and devices) for this wintry blue delight. A dandelion froze near the end of winter. If it holds on long enough to thaw out, there might be a chance for a revival; but will it make it?
Whether you look at it as a portrait that is locked in time or one that reminds you to never give up, why not download this onto your device to ponder at while you decide?
This wallpaper is courtesy of Rishabh Agarwal, an avid photographer from India. He has a website dedicated to his love of photography at Rish Photography [http://rishabhagarwal.com]. If you are interested in his photographs, please contact him at his website.
If you would like to see your own beautiful artwork or photographs turned into wallpapers and shared amongst our readers like what we are doing here, drop us a line and we’ll see what we can do.
The post Freebie Release: Wintry Blue Wallpaper appeared first on Hongkiat.
After a rough day at the office, there is solace to be found in a quiet night’s drive on a deserted bridge. Perhaps it’s due to the serenity afforded by the enveloping night, or a calming effect of the waters below. A soothing wallpaper like Dark Reflections may provide a fraction of the same solace.
Get a copy of this wallpaper that celebrates a fine combination of the natural element of water and man-made architectural marvels in perfect symmetry.
Recommended Reading: More Wallpapers!
Dubai-based amateur photographer Chiragh Bhatia has been pursuing photography as a hobby for the past 7 years. A self-taught photographer, he has been sharing tips and techniques on the Internet for producing top-quality work. He applies his background in architecture to his photography.
If you would like to see your own beautiful artwork or photographs turned into wallpapers and shared amongst our readers like what we are doing here, drop us a line!
The post Freebie Release: Dark Reflections Wallpaper appeared first on Hongkiat.
Creating WordPress Themes from scratch can be challenging. After completing this task multiple times, you might start seeking a more straightforward approach. I’ve discovered that building on a basic template can significantly speed up the project timeline and reduce stress.
Therefore, I’ve designed a unique WordPress template called “Bare Responsive”, available for download below. The design is mobile-friendly and responsive to various screen sizes. It includes all the standard WordPress template files, and you have complete freedom to modify them as you wish.
My goal is for this template to serve as a foundation for WordPress development, offering a better starting point than a blank slate.
Along with the template files, I’ve also provided some sample data (also available for download below) that you can import and use to test the design.
In the following article, I’ll discuss some of the WordPress features and how you can leverage them in your themes.
Within the header.php file, I’ve added a number of extra metadata tags and third-party scripts. It’s advisable to modify the author meta tag to reflect your name or your website’s name.
I’ve also incorporated an external stylesheet link to the Google web font Quando, which I utilize for the header text.

You might observe that I’ve employed a custom navigation setup within the theme. There’s no strict need to modify the PHP code. However, it’s beneficial to review the parameters for wp_nav_menu() to determine any desired alterations.
What you should consider is creating a new menu within the WP Admin under Appearance > Menus. Subsequently, you can link this new menu to the “header-menu” located in the template file.

This approach allows you to integrate custom links, pages, and even sub-pages into the top navigation without needing any coding.
One of the most intriguing sections of code to customize is within the functions.php file. It contains all the default theme properties, encompassing navigation menus and widgetized sidebars.
I’ve configured two separate, widgetized sidebars. By default, there’s no need to add anything to them, as the template displays non-widgetized data. However, it’s straightforward to locate these sidebars under Appearance > Widgets.

The main sidebar is positioned to the right for all standard layout styles. As the screen width decreases, this sidebar becomes hidden and is substituted with a responsive sidebar. This new mobile-friendly sidebar comprises only two elements and appears below the page content.
Having this flexibility is beneficial, as you might opt to populate both sidebars with the same content. Alternatively, you can establish entirely distinct content for each sidebar, which might be more effective.
I’ve also defined several other functions within the theme file.
Initially, I’ve removed the #more hash from the end of blog post links. I’m not fond of this standard WordPress feature, as it seems somewhat intrusive.
Additionally, the archive pages don’t include a “read more” link by default. To address this, I’ve incorporated it into the HTML using a custom WordPress filter.
The “bare-responsive” theme is designed to be straightforward, allowing you to upload the template and begin editing files directly within the WordPress admin panel. While you have the option to work with the files individually, this can be challenging without a WordPress blog to test the modifications.
Emphasizing simplicity, I’ve restricted the theme files to just the essentials. Additionally, all the responsive mobile CSS codes are consolidated in the style.css stylesheet.

You can adjust the template styles as needed to better align with your preferences.
The custom script.js facilitates the mobile responsive dropdown navigation panel. I believe this approach offers an optimal solution for header navigation, resulting in a seamless appearance.
If you wish to modify the CSS styles of the mobile menu, ensure that you maintain the consistency of the IDs and classes with the jQuery script.

I genuinely hope that this “bare-responsive” template serves as an inspiration for budding developers. Navigating WordPress can be daunting, and having a foundational code can be immensely beneficial.
I’m open to addressing queries and welcoming feedback, recognizing that no template is flawless.
Collaborating with fellow developers is a valuable way to enhance your skills and identify common pitfalls. So, dive in and start coding!
Download and import this XML file into your WordPress to give it some dummy content.
This whole project is purposefully released as open source under the MIT license which means you can edit and distribute unlimited copies for any project as long as you do not claim it your own, or re-sell it.
The post Freebie Release: “Bare Responsive” – A blank and responsive WordPress Theme appeared first on Hongkiat.
Until the business card finds a better, faster, more convenient replacement, it serves as the most secure connection one can make with another in the offline world of business. Putting all your contact and business information into one handy 3.5 by 2-inch piece of paper is the best reminder you can leave with your potential and existing clients.
There are plenty of things one must look into when designing a business card, but if budget is a big constraint for your business or the new startup you are working on, these ten business card templates may be the break you need. Created by Meng Loong of Free-Business-Card-Templates.com, these exclusive business card templates are available in PSD format for hongkiat.com readers to download and use.
Business card template #1 [ Preview – Front – Back ] [ Download ]
Business card template #2 [ Preview – Front – Back ] [ Download ]
Business card template #3 [ Preview – Front – Back ] [ Download ]
Business card template #4 [ Preview – Front – Back ] [ Download ]
Business card template #5 [ Preview – Front – Back ] [ Download ]
Business card template #6 [ Preview – Front – Back ] [ Download ]
Business card template #7 [ Preview – Front – Back ] [ Download ]
Business card template #8 [ Preview – Front – Back ] [ Download ]
Business card template #9 [ Preview – Front – Back ] [ Download ]
Business card template #10 [ Preview – Front – Back ] [ Download ]
We hope you like it and feel free to spread the word!
The post Freebie Release: 10 Business Card Templates (PSD) appeared first on Hongkiat.
Time for another freebie!
We recently published a post about the emergence of a new trend called Long Shadow Design. As a follow-up to that, this freebie contains a long shadow flat icon set designed by one of our readers, Simon Rahm.
Simon, a 16-year old student from Austria, emailed us that he was inspired by the post, and started working on his own flat icon set. Here’s what he presented us and has since agreed to publish as a freebie exclusively for hongkiat.com readers:
More flat design related posts:

The icons in this icon set are available for download in PNG and AI. Get it in the sizes you need or go ahead and download the AI file.
Download Long Shadow Flat Icon Set (PNG)
or you can just get the AI file from this link below:
For more of Simon’s work, check out his portfolio here.
The post Freebie Release: Long Shadow Flat Icon Set by Simon Rahm appeared first on Hongkiat.
We featured Simon Rahm’s Long Shadow Flat Icon Set giveaway a while back. Well, Simon is back with another set of icons to give away to our readers. We know that a lot of our readers are fans of Adobe Creative Suite (and if you haven’t checked out Adobe Creative Suite Toolbar Shortcut Wallpapers, you’re welcome).
This time, Simon has applied the long shadow design to some of our favorite Adobe Creative Suite Icons, namely:
Here’s a larger preview of the icons.

The post Adobe Long Shadow Icons appeared first on Hongkiat.
Why Outsource In The First Place?

The Earth revolves around the Sun. Hopefully, we can agree on that.
Metaphorically though, it certainly revolves around money.
That is especially true when it comes to business. Most business decisions are heavily influenced by finances: whom to hire, which market to move into, or how big of a crypto loan you may need to make your ideas come to life. And of course, whether the business itself succeeds or fails is measured by market capitalization. So once again, by money.
It comes as no surprise that business owners look for ways to grow their revenues and limit expenses. And that’s where software outsourcing comes into play.
With software outsourcing — that is, when you employ external software development services to handle any of your projects — you can lower the overall costs of software development and save both time and money. While it also gives you access to global talent, streamlines processes, and lets you focus on your business’s core strengths, cost reduction is the most often cited reason (70%), as shown by the 2020 global outsourcing survey.
Doubtful? Then let’s compare the overall costs of hiring an in-house team versus working with a software development company.
Before you can start the hiring process, you have to set up office space for your future employees. The average cost of office space in the US is between $8 and $23 per square foot, and since the average US office is 1,400 square feet, the total might fall between $11,200 and $32,200.
Of course, the total cost of renting space differs across cities and countries. In the US, New York is definitely the most expensive, beaten only by Hong Kong on the global scale, and with Tokyo and London hot on its heels.
Besides putting the desks together with matching chairs, it’s also important to include access to a private kitchen, a lounge area, meeting rooms, and, most importantly, parking space. These will of course generate additional costs, but they will keep your employees happy in the long term.
One way of lowering office space costs is to share a co-working space with other businesses, but this might be risky when dealing with sensitive information on a day-to-day basis.
When you have the space, you need to fill it with hardware and software infrastructure, which may include computers, programs, subscriptions, servers, and so on, depending on your needs. But you also have to pay for general things, such as basic utilities and office supplies, which may need regular maintenance, repair, or even replacement after a few years of use. Should you need to allocate resources, for example, for efficient Kubernetes workload management, you’ll be in for additional costs.
By that point, the office should be ready to welcome new employees onboard. What kind of expenses should you expect?
First, the recruitment. Depending on the process, there might be some costs involved if you’re not doing it on your own — that means hiring recruitment specialists or agencies to help you out in securing the best local developers for your software development project.
In Germany, the average recruitment cost is $5,732, while in the US it’s $4,129, and in the UK, $4,258.
Then come the usual costs that surround the hiring itself. These include the base salary, taxes, insurance, and fringe benefits, from paid sick leave and a retirement plan to access to welfare & recreational facilities, depending on the country.
According to research done by UHY, the international accounting and consultancy network, average employment costs on a global scale are now almost 25% of an employee’s salary. The highest costs can be found in Europe, while the lowest are in Canada, Denmark, India, and the US.
Furthermore, due to the impact of the pandemic on the workplace, additional benefits might become a new standard, such as extended remote work opportunities or even child care options. Not to mention the hand sanitizers and disinfectants, which quickly became a daily necessity.
Next on the list is onboarding. It might be surprising, but the average cost of the onboarding process in a small to medium business is $400 per employee. This includes the offer packs, preparing the necessary equipment, and time spent on bureaucracy and showing the new employee the ropes.
Then it’s time for training. Even if you hire experienced professionals in software development, learning the work culture, understanding the ongoing processes, and getting to know fellow workers will take some time. In some cases, it might even take 1 to 2 years for the employee to become fully productive in the new environment, according to Training Industry Quarterly.
And if you want to retain your employees for that long, helping with their professional growth (encouraging self-development, offering mentorship opportunities, and providing access to various courses) may make a difference in turnover. This way, you show that you care about your employees and that there’s room for their careers to advance. All of that, though, might require additional costs, especially the eventual pay raises that come with higher qualifications.
In some situations, money might be slipping through your fingers without you even realizing it.
With most contracts, you don’t only cover the software development costs themselves, but also all the hours spent in-seat. For example, if your company works on a project-to-project basis, there might be stages in the workflow where your senior software developers or software engineers (or any other employees, take your pick) are not utilized to the fullest extent. Due to circumstances, they might not even be able to work on their tasks, waiting first for resources or their teammates’ input.
And if your employee goes on sick leave or simply on vacation, you have to pay for that as well. Of course, these things are important in the long run for keeping turnover low, but it’s something to keep in mind.
There might also be trouble when parting ways with your employees. In some countries, like Poland, the notice period can last up to 3 months, and in Germany up to 7, depending on the length of employment. For the employer, that means having a person on board who most probably isn’t as motivated as others to do their work well, which in turn might lead to monetary losses.
During the pandemic, many employees were forced to work from home. In the US, 71% of workers whose jobs could be done remotely were working from home, and such jobs make up 56% of all positions.
It's hard to say whether the trend will last, but there's a chance that telecommuting will prevail in certain sectors, mostly the IT industry, Finance and Insurance, and Management. This could potentially mean lower office-management costs, but in return, it requires providing employees with the hardware and software they need to work comfortably from home, as well as programs, applications, or even an intranet for proper task management. To fully utilize them, you might have to pay for subscriptions or even build your own intranet with the help of a software house, following the example of ib vogt.
Some companies went one step further during the pandemic, deciding to cover the costs of internet and telephone service for their employees. Since then, laws have been enacted in certain places (including ten US states) requiring employers to reimburse employees for remote-work expenses, similar to the Netherlands, with Poland following suit.
So even though there might be some opportunities to save money in certain places, there might be some expenses in others to balance it out.
None of the above really applies to outsourcing. You don't have to worry about salaries, benefits, workplace culture, or office maintenance. What are you paying for then, exactly?
The only thing you'll be invoiced for is the work itself, without having to stress over the little things. Most of the time, outsourcing companies offer two pricing models for their software development services: the Fixed Price Model and the Time and Material Model.
The first, the Fixed Price Model, assumes that payment will be invoiced either at pre-defined milestones or before and after the project, split into two installments of agreed-upon proportions. This model is well suited for software development services with straightforward, easy-to-predict processes, as well as for projects whose goals and requirements are clearly stated and not subject to change.
Business owners choose the Time and Material Model when they want their project to be scalable and flexible enough to employ changes throughout the development process or when it’s hard to measure the scope. In this scenario, the only thing you pay for is the time of the development team, as well as for any additional resources required.
Both models ensure that you pay for the output, not the time spent in-seat, and only for that.
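The difference between the two models comes down to how the invoice is computed. The sketch below illustrates that arithmetic; the function names and all dollar and hour figures are hypothetical placeholders, not rates quoted anywhere in this article.

```python
# Minimal sketch of the two pricing models described above.
# All numbers below are illustrative placeholders, not real market rates.

def fixed_price_invoices(total_price: float, milestone_splits: list) -> list:
    """Fixed Price Model: split an agreed total across pre-defined
    milestones, e.g. [0.5, 0.5] for a before/after split."""
    assert abs(sum(milestone_splits) - 1.0) < 1e-9, "splits must sum to 100%"
    return [round(total_price * share, 2) for share in milestone_splits]

def time_and_material_cost(hours_worked: float, hourly_rate: float,
                           extra_resources: float = 0.0) -> float:
    """Time and Material Model: pay only for the team's time,
    plus any additional resources required."""
    return hours_worked * hourly_rate + extra_resources

# Fixed Price: a $20,000 project split 30/70 between kickoff and delivery.
print(fixed_price_invoices(20_000, [0.3, 0.7]))   # [6000.0, 14000.0]

# Time and Material: 350 billed hours at $50/h plus $500 of extra resources.
print(time_and_material_cost(350, 50, 500))       # 18000.0
```

Either way, the invoice is driven by the output delivered, not by idle in-seat hours.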
There are no universal hourly rates for software developers around the globe, just as there are no set prices for specific projects. Each company sets its own rates, and even in famously cheap outsourcing destinations, such as India, you can find surprisingly high rates, just as you can find relatively cheap software development in countries known for being expensive.
Why do the rates differ so much? First and foremost, each country has a different economic situation with varied wages and living standards. Thus, the prices constantly fluctuate, in response to what’s happening on the macro and microscale.
Taking that into account, it’s easy to see how outsourcing can help you save money if you’re smart about it. By analyzing your financial situation and comparing available options, you can gain a lot of value by spending less.
But how much is that, exactly? It really depends on too many factors to clearly measure once and for all, so let’s look at the average rates across the countries and compare them.
The table above shows the astounding difference between countries in how much software developers can actually make. But how accurate is that data? It depends on how many people have shared information about their careers, what industry they worked in, and other factors. It's also worth keeping in mind the pandemic's effects on the labor market. With the world still struggling to recover, wages in the majority of countries, like the US, Italy, Canada, or France, have actually risen. And with certain sectors doing better than others, companies fighting for great employees may face stiffer competition than usual.
To cross-check this information with how the firms price their work, let’s look at Clutch.
As of today, you can find 18,897 firms listed under “Top Custom Software Development Companies”, and only 7,657 post their development rates. Thanks to the easy filtered search, we can quickly check the average hourly rates entered by such companies.
To look at those numbers from a different perspective, let’s see how that translates into percentages.
This data shows some correlation with the average rates in the IT services posted above, but it’s still noticeably higher. It does follow the general trends of the outsourcing market: Western Europe, North America, along with Scandinavian countries, fall into the higher pricing ranges when compared to South America, Asia, and Eastern European countries. But even in a famously cheap outsourcing destination such as India, you can find surprisingly high offshore software development rates. How can we explain that?
We can assume that every software development company on Clutch caters to clients from well–developed countries, and thus they can raise their rates accordingly; so that their employees are appropriately compensated and yet stay attractive on the outsourcing market. Many of them present themselves as global software development companies, for whom the time zones and cultural differences are non-existent, providing their IT services to clients from all over the world.
So when you’re hiring offshore developers by average hourly rate, you can expect more than 75% of them to cost less than $99 per hour.
Let’s assume you’re more inclined towards the Fixed–Price Model or you would like to know the final cost. Clutch once again will help us out in our estimations, where you can filter through the software development companies by budget. Let’s see what they set as a minimum price per project.
Let’s look at these numbers again, but in percentages.
The largest share of projects, 33%, starts at the $5,000 mark, and there's also a good chance of finding options in even cheaper price ranges. Overall, there's an 83% chance of getting your price estimated below $25,000.
If you want to hire offshore developers per hour, you can expect 57% of them to cost below $50 and 75% below $99.
If you want to outsource a whole project to a software development company, you have a 61% chance of having to pay less than $10,000, and 83% of paying less than $25,000.
Taking the above into consideration, pricing at MPC stays at an average level while still providing consistent quality built on years of experience delivering in-house products. And while we strive to reach challenging goals, learning how to optimize our work and further improve our services, we keep our clients' financial constraints in mind and respect the decisions they make because of them.
The average rate at MPC is $50 per hour, varying with skillset, position, and seniority level. A senior full-stack developer with 6 years of experience under their belt will certainly cost more than a mid-level QA engineer. And in this market, like any other, supply and demand come into play as well. Recently, we've noticed a big increase in the popularity of JavaScript-based technologies, so salaries of those specialists have increased significantly, resulting in higher hourly rates.
At the same time, the average Fixed Price project doesn't require a budget different from the market norm. The vast majority of new projects we take on are MVP versions of applications that fill a niche. A budget between $15,000 and $30,000 allows us to provide value to the end user, validate the idea, and in many cases even monetize it to expand the solution later on.
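As a quick sanity check, the rate and budget figures above can be translated into a band of billable hours. This is simple division, using only the numbers stated in the text ($50/h average rate, a $15,000 to $30,000 MVP budget); the function name is illustrative.

```python
# Rough check: how many billable hours does a typical MVP budget buy
# at the average $50/h rate quoted above?

AVERAGE_RATE = 50  # USD per hour

def hours_for_budget(budget: float, rate: float = AVERAGE_RATE) -> float:
    """Convert a project budget into billable hours at a given rate."""
    return budget / rate

low, high = hours_for_budget(15_000), hours_for_budget(30_000)
print(f"{low:.0f} to {high:.0f} billable hours")  # 300 to 600 billable hours
```

In other words, that budget range corresponds to roughly 300 to 600 hours of team time, a plausible size for an MVP.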
Software development can be done in two ways: in-house, that is, with your own team, or by partnering with a software development company that specializes in a niche relevant to your business. Both options come with costs in different areas, so there's no right choice for everyone; it all depends on the circumstances. That's why each business owner should analyze their own situation first and determine which software development model is more beneficial and convenient.