A track record across healthcare AI, drone logistics, enterprise CRM, and veteran services. Eight years making sure evidence drives what gets built, how well it works, and whether people come back — when everyone just wants to move faster.
AI-assisted self-diagnosis microsite embedded in BestBuy.com. Built on 25 years of Geek Squad call transcripts. Validated with real users. Later evolved into a triage model across all Best Buy-supported products.
Customers arriving at service with no way to self-triage — leading to misrouted cases, unnecessary friction, and high call volume.
The repair-or-replace decision was already happening in the customer's head — but nobody was giving them the information to make it. Showing device value, trade-in value, and new device pricing within the same flow would have turned a support interaction into a conversion moment, serving both the customer and the business simultaneously. The opportunity was dismissed during design. Unmoderated testing surfaced it as a real user need — research I conducted and used to make the case.
An AI-assisted front end that reduced misrouting and created a feedback loop for service quality.
Designed, tested, and launched a conversational self-diagnosis tool combining machine learning and automated handling to help customers identify device issues without calling Geek Squad. The system drew on 25 years of call transcript data to surface likely diagnoses. 80% of test participants said they would use it in real life. The application later evolved into a triage model deployed across all Best Buy-supported products.
As a customer with a malfunctioning device, I need to understand what's wrong without calling or visiting a store, so I can decide what to do next.
As a customer considering repair, I need to see my device's current value and trade-in offer alongside repair cost, so I can make an informed decision.
Conversational interface trained on Geek Squad call transcript data. Automated issue classification and recommended next action. Integration with device trade-in valuation and current pricing. Handoff path to scheduling or purchase if customer chooses to act.
Live agent escalation, warranty verification, third-party device support, remote diagnostics
Led discovery and strategic roadmap for an 11,000-employee global intranet at a Fortune 100 agriculture and energy company. The content organization wasn't the problem — access and relevance were.
Employees with experience at comparable Fortune 500 companies described their intranets as daily tools — communication hubs, task lists, collaboration platforms. This one was a static site. People worked around it, not through it. Search was degraded enough that findability had effectively broken down.
Stakeholders believed content organization was the issue. The actual problem was that employees couldn't find content relevant to their role and location — a personalization and access problem, not a taxonomy problem.
A roadmap reframing the intranet as a role-aware tool, not a document repository. One that gave employees utility — not just access to files.
Led discovery and strategic roadmap for CHS Source, the internal intranet for an 11,000-employee Fortune 100 agriculture and energy company with staff across the US, Middle East, South America, and Asia. Conducted interviews, surveys, site traffic analysis, and heuristic evaluation across a wide range of employee types — corporate, field, management, new hires, tenured staff.
The research revealed that employees worked around the intranet rather than through it. Benefits enrollment was the top use case, but half of those employees never returned after annual sign-up — many of them field workers without office computers. Search was distrusted. Navigation reflected the org chart, not what employees needed to do. Delivered a phased roadmap reframing the site around employee utility rather than company content, and introduced a continuous improvement model so the internal team could execute independently after the engagement ended.
As a field employee, I need to complete benefits enrollment without visiting an office or relying on a work computer, so I can access what I'm entitled to without barriers.
As a corporate employee, I need to find tools and HR resources quickly without navigating through company news, so I can do my job without friction.
As a manager, I need access to hiring, training, and HR information in one place, so I'm not dependent on email or phone to get answers.
Navigation restructured around tasks rather than org chart. Role- and location-based content personalization and tagging. Search UI with filtering and sorting. Bookmark and customization capability for individual employees. Kiosk or tablet-based enrollment path for non-office field locations. Deep linking into HR tasks and benefits workflows. Single sign-on exploration and third-party app decoupling strategy.
Full third-party application decoupling, content migration, new content creation, development execution
Identified critical purchase blockers in DroneUp's drone delivery e-commerce experience and built the research infrastructure to understand what customers were actually hitting — in a company built for logistics, not customers. Bad data and blocked purchases were problems the Fortune 1 Walmart partnership couldn't afford.
Customers within delivery range were hitting a dead end with no explanation. If a hub was down for weather, maintenance, or outside operating hours, the site returned nothing. No feedback, no alternative, no path forward. Customers didn't know why they couldn't complete a purchase.
60–70% of behavioral data was contaminated by internal employee testing. Every metric being used to make product decisions was noise. There was no clean baseline for customer behavior anywhere in the system.
Research infrastructure built from available resources — chat logs, employee testing, piggybacked marketing research — that surfaced real customer behavior for the first time. A filtered data baseline that made product decisions trustworthy and removed a chokehold on sales before the Walmart.com integration could move forward.
The live e-commerce experience had a fundamental problem: customers within delivery range were getting no feedback when a hub was unavailable. Weather, maintenance, operating hours — the site returned nothing. Purchases failed silently. Fixing that feedback loop was the first priority.
Building research infrastructure meant working within significant constraints. No budget was approved for formal usability or field research. I used what was available — customer chat logs, employee usability sessions, and piggybacked questions into marketing research. That work surfaced how far customer expectations had drifted from what the product could actually deliver. When aspirational customer research reached leadership, the reaction was emotional and confrontational. Part of the role became holding that tension in a room full of people who took customer criticism personally.
The other foundational fix was the data itself. 60–70% of behavioral metrics were contaminated by internal employee testing. Partnering with a data scientist to filter out internal traffic established a clean customer baseline — a prerequisite for anything that came after.
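The mechanics of that filtering are simple in principle. A minimal sketch of the idea follows; the field names, internal-traffic markers, and conversion metric here are invented for illustration, not DroneUp's actual pipeline:

```python
# Illustrative only: separating internal employee test traffic from customer
# behavior before computing metrics. All identifiers below are hypothetical.

INTERNAL_USER_IDS = {"emp-001", "emp-002"}     # hypothetical employee accounts
INTERNAL_IP_PREFIXES = ("10.", "192.168.")     # hypothetical office IP ranges

def is_internal(event: dict) -> bool:
    """Flag events generated by employees rather than customers."""
    return (event["user_id"] in INTERNAL_USER_IDS
            or event["ip"].startswith(INTERNAL_IP_PREFIXES))

def conversion_rate(events: list[dict]) -> float:
    """Share of sessions that ended in a completed purchase."""
    return sum(e["converted"] for e in events) / len(events)

events = [
    {"user_id": "emp-001", "ip": "10.0.0.4",    "converted": True},   # internal test
    {"user_id": "cust-17", "ip": "73.9.1.20",   "converted": False},  # real customer
    {"user_id": "cust-22", "ip": "73.9.1.21",   "converted": True},   # real customer
    {"user_id": "emp-002", "ip": "192.168.1.9", "converted": True},   # internal test
]

customer_events = [e for e in events if not is_internal(e)]
print(round(conversion_rate(events), 2))           # contaminated: 0.75
print(round(conversion_rate(customer_events), 2))  # clean baseline: 0.5
```

The point of the sketch is the gap between the two numbers: when most traffic is internal testing, every metric computed on the unfiltered stream overstates or understates real customer behavior.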
As a customer within delivery range, I need to understand why delivery isn't available right now, so I can decide whether to wait or come back later.
As a product team, we need behavioral data that reflects actual customers rather than internal testers, so we can make decisions based on reality.
As a customer expecting Amazon-level responsiveness, I need real-time status and communication during my delivery, so the experience matches what I'm used to.
Hub status feedback surfaced at point of purchase — weather, maintenance, operating hours. Guest checkout path cleared of unnecessary friction. Address verification flow improvements. Internal employee traffic filtered from behavioral metrics. Monthly research reporting cadence established. Usability testing framework built within zero or minimal budget constraints.
Walmart.com integration build, vendor portal, flight operations software, pilot-facing tooling
A restructuring eliminated the vendor portal's entire product leadership without anyone realizing what was being lost. No one left in the company understood what the portal was or what was at stake if it didn't ship. The platform wasn't an internal ops tool — it was the entire client relationship layer for every enterprise partner DroneUp would ever take on.
The institutional knowledge for the vendor portal walked out the door with three people. Leadership's solution was to give enterprise clients access to HubOps — a tool built for FAA-certified pilots, not Walmart store managers or restaurant operators. There was no client-facing infrastructure, no champion, and no one who understood why it mattered.
Leadership saw an internal ops problem. It was actually a client infrastructure problem. Every enterprise partner — Walmart, DoorDash, Instacart, Olo, Starbucks, Chick-fil-A — needed their own visibility into live orders, margins, and location performance at the right permission level. Without the portal, every new client relationship was a manual operation with no path to scale.
A client-facing platform that made DroneUp sellable at enterprise scale. Pricing, ordering, mission visibility, delivery margins, and multi-location controls surfaced for the right person at the right permission level — and a path toward the 300–500 daily missions per hub that the Walmart partnership required.
I had been iterating wireframes and working closely with the vision lead long enough to understand what the portal needed to be. When the restructuring happened I was the only person left who could explain it. Leadership's instinct was to give enterprise clients access to HubOps — no one with decision-making authority understood the difference between an internal ops tool and a client relationship platform. I built the case and presented it repeatedly to tech leadership, C-suite, marketing, and operations until budget and resources were assigned.
Enterprise partners had no way to manage their own delivery operations without going through DroneUp manually. Every new client relationship required internal mediation with no path to self-service or scale.
As a Walmart store manager, I need live order and delivery visibility for my location so I can manage customer expectations without contacting DroneUp directly.
As a regional director, I need cross-location visibility with appropriate permission controls so I can monitor performance at scale.
As an integration partner, I need API pathways into my existing platform so drone delivery becomes a native option rather than a separate workflow.
As a merchant, I need to manage my own pricing, products, and delivery margins so I control my own business outcomes.
Role-based permissions supporting store, regional, and enterprise access. Live order and mission status visibility per location. Margin and revenue reporting per merchant. Product and pricing management per vendor. Multi-location dashboard. API integration pathways for merchant platforms.
HubOps internal operations, pilot-facing tooling, FAA compliance workflows, consumer-facing e-commerce
A 20-year-old CRM was failing across a global sales organization. Every prior agency had recommended more features. The actual problem was elsewhere.
EY's CRM adoption had collapsed across a global sales team. The platform had accumulated two decades of technical debt and had never recovered a meaningful user base. Leadership framed it as a product problem — the tool didn't do enough, or didn't do the right things. Multiple agencies had cycled through the engagement before Robots and Pencils arrived. None of them produced findings that changed the organization's direction.
The adoption barrier wasn't the feature set. It was data quality — salespeople didn't trust what was in the system, so they didn't use it, which made the data worse, which deepened distrust. The loop was self-reinforcing and invisible to stakeholders who were looking at the tool rather than the behavior around it. Underneath that was a deeper structural issue: the organization lacked a sales culture that would sustain CRM use regardless of what the product did. Feature investment couldn't fix either problem.
The recommendation was a pivot — away from feature development and toward sales culture cultivation as the precondition for any product investment. The research package, built from 40+ interview hours and over 1,200 data points across teams, gave leadership a defensible basis for reorienting the roadmap. Rather than delivering a longer backlog, the engagement delivered a reframe: the product wasn't the thing that was broken, and fixing it first would waste the budget.
EY engaged Robots and Pencils to diagnose a CRM adoption crisis that had persisted across two decades and multiple prior engagements. The platform served a global sales organization and had never achieved the adoption rates that would justify the investment behind it. Leadership's working assumption was that missing features were responsible — that if the tool did more, people would use it more.
The discovery process ran 40+ interview hours across stakeholders and users in multiple regions, generating over 1,200 discrete data points. What emerged wasn't a feature gap. It was a data quality problem compounded by a cultural one. Salespeople didn't enter data because they didn't trust what was already there. That distrust was rational — the data had degraded over time precisely because no one used the system reliably. The loop had no exit that a product roadmap could provide.
The deeper finding was structural. The organization didn't have the sales management culture — the rituals, accountability structures, and leadership behaviors — that make CRM adoption stick. Feature investment in that environment would have been absorbed without changing outcomes. The recommendation was to treat culture as the upstream dependency and defer product investment until the conditions for adoption existed.
The deliverable was a research synthesis that reoriented the engagement's direction and gave leadership language to explain internally why the next step wasn't a build cycle. That finding — which prior agencies had not surfaced — became the basis for a revised organizational strategy.
CRM adoption had failed across EY's global sales organization. The platform was technically functional but practically unused. Prior agency engagements had not produced findings that changed organizational direction. Leadership attributed the failure to the product; the actual failure was upstream of it.
As a sales leader, I need to understand why adoption has failed so I can stop investing in the wrong solution.
As a salesperson, I need to trust that the data in the system reflects reality before I'll contribute to it.
As an EY executive, I need a research-backed rationale for changing direction that I can defend internally.
Conduct structured discovery across user segments and organizational levels. Synthesize findings into a root cause diagnosis with supporting data. Deliver a strategic recommendation with implementation sequencing. Frame findings in terms of organizational decision-making, not feature prioritization.
Feature specification, UX design, technical architecture, platform evaluation or vendor selection
Surgeons using the da Vinci robotic surgery training platform were routing around it entirely. The question wasn't how to improve the product — it was why a high-stakes professional population had decided the product wasn't worth using.
Intuitive Surgical's da Vinci training platform was failing to retain the surgeons it was built for. Usage was low, and the organization's working assumption was that the product needed more — more content, more features, more polish. A year-long build cycle was on the table, with all validation deferred to the end of it.
Surgeons weren't disengaged from training. They were actively seeking it out — on Facebook groups and YouTube channels they'd built themselves. The platform hadn't failed to provide training content; it had failed to provide it in a form surgeons trusted or found usable under the conditions they actually worked in. The workaround communities were the signal. They indicated a trust and usability gap that more features wouldn't close, and that a year of building without validation would make more expensive to fix.
A two-tier roadmap separating quick wins from strategic investment — resource-efficient tactical improvements that could ship fast and rebuild trust, sequenced ahead of longer-term competitive features. And a direct challenge to the build cycle itself: iterative delivery over a Big Design Up Front approach that deferred all learning to a point where course correction would be costly.
Intuitive Surgical engaged outside strategy help on the da Vinci robotic surgery training platform after adoption among surgical professionals underperformed. The organization's working conclusion was that a significant feature build was the appropriate response — with all validation deferred to the end of a year-long cycle.
Discovery identified the actual problem before any solution work began. Surgeons weren't avoiding training; they were getting it from Facebook groups and YouTube channels they'd built themselves. The gap wasn't content volume. It was usability and trust under the conditions surgeons actually work in.
The deliverable was a two-tier roadmap distinguishing quick wins from longer-term competitive investment, paired with a direct recommendation against the proposed build cycle. Deferring all validation to the end of a year-long build — in a context where course correction is expensive — was the wrong structure for a problem rooted in user trust.
Platform adoption was underperforming among the surgical professional population it was built for. Surgeons were routing around the product to unofficial Facebook and YouTube communities for training content. The organization attributed the failure to missing features and proposed a year-long build cycle with deferred validation.
As a surgical professional, I need training content that works within my actual time constraints and workflow — not a platform I have to work around.
As a product leader, I need a roadmap that distinguishes between what we can fix quickly and what requires sustained investment.
As a clinical stakeholder, I need confidence that we're not spending a year building against unvalidated assumptions.
Conduct discovery with surgical professional users to identify usability and trust gaps. Map workaround behavior to specific platform failure points. Deliver tiered roadmap with implementation sequencing rationale. Provide recommendation on build cycle structure with supporting evidence.
Feature design, content development, clinical training curriculum, engineering specification
A Series A multilingual interpretation platform serving the European Parliament, Interpol, NATO, and the United Nations had no systematic way to know whether its product was working for the people using it.
KUDO had assembled an unusually high-stakes client roster for a startup at its stage. The platform was live and in use across international institutions where interpretation failures carry real consequence. Despite that exposure, the organization had no structured research practice. Product decisions were being made without a reliable mechanism for gathering or acting on user feedback.
The gap wasn't awareness — the team understood that research mattered. The missing piece was operational. No framework existed for running usability work in a context where users were professionals operating under session pressure, across languages and jurisdictions, often in environments the team couldn't directly observe. Standard research methods didn't translate. The research problem was as specialized as the product.
A usability research framework scoped to actual operating conditions — professional interpreters and conference participants working in high-stakes, time-constrained sessions across institutional environments. The deliverable was a repeatable process the team could run forward, not a one-time findings report.
KUDO was a multilingual interpretation platform at Series A, with an enterprise client list that included the European Parliament, Interpol, NATO, and the United Nations. The product sat at an unusual intersection: consumer-grade interface expectations applied to a professional tool operating in high-stakes institutional settings, across languages, in sessions where failure has no graceful recovery.
The engagement focused on building a usability research framework rather than delivering a discrete findings package. At that stage and with that client profile, the more durable need was infrastructure — a repeatable process for gathering feedback from professional interpreters and conference participants in conditions that standard research approaches weren't designed for. Users were operating under session pressure, in multilingual environments, often in institutional settings with access constraints.
The framework defined who to recruit, how to reach them, what to measure, and how to run sessions given those constraints. It gave the product team a mechanism for ongoing learning rather than a snapshot that would age out.
A globally-used IT infrastructure monitoring application had been built by engineers, for engineers. The network administrators who depended on it daily were working around it as much as with it.
Nagios XI was a legacy monitoring platform with significant market presence but an interface that reflected how it was built — by engineers, without sustained attention to the administrators who used it operationally. Usability debt had accumulated over time, and the gap between what the tool could do and what users could reliably do with it had widened.
The product's complexity wasn't a feature — it was a barrier wearing the appearance of one. Power users had internalized workarounds that made the tool functional for them, which masked the severity of the problem for anyone outside that group. Accessibility gaps compounded the issue for enterprise environments with compliance requirements. The surface read was that the product needed modernization; the actual problem was that it had never been designed for its primary user in the first place.
A modernized platform scoped to how enterprise network administrators actually work — accessible, legible under operational pressure, and built around the monitoring workflows that matter rather than the full range of what the system could technically expose. The redesign reoriented the product around its user rather than its architecture.
Nagios XI was a globally-used IT infrastructure monitoring platform with an interface that had grown from the engineering environment that produced it rather than from the needs of the administrators who depended on it. The gap between technical capability and operational usability had become significant enough to affect how the product competed in enterprise environments.
The engagement centered on transforming an engineer-built legacy system into a modern, accessible platform for enterprise network administrators. Discovery identified the core problem: the product's complexity read as depth to power users who had learned its patterns, but created real friction for the broader administrative population, along with compliance exposure in enterprise environments with accessibility requirements.
The redesign reoriented the product around its actual user rather than its underlying architecture, prioritizing the monitoring workflows that drive daily operational decisions over the full surface area of what the system could expose.
Nagios XI's interface reflected the engineering environment that produced it rather than the operational needs of the network administrators who used it daily. Usability debt had accumulated over time. Accessibility gaps created compliance exposure in enterprise accounts. The product's complexity was masking adoption friction rather than indicating product depth.
As a network administrator, I need to identify and act on infrastructure issues quickly without having to reconstruct what the system is telling me.
As an enterprise IT leader, I need the platform to meet accessibility requirements so I can deploy it across my organization.
As a product leader, I need the redesign to preserve the monitoring depth existing users rely on while opening the product to a broader administrative audience.
Discovery with enterprise network administrator user segment to identify primary workflow friction points. Accessibility audit against enterprise compliance standards. Interface redesign scoped to primary monitoring workflows. Preservation of advanced functionality for power user segment.
Backend architecture changes, monitoring engine modifications, feature additions beyond scope of redesign brief
A 20-year-old acquired platform was carrying two separate problems: a patient intake process with hidden workflow inefficiencies, and no design system — which meant every developer on the product was solving the same problems independently.
PointClickCare had acquired a long-tenured platform without inheriting a coherent product foundation. The patient intake process had accumulated inefficiencies that weren't visible at the feature level — the workflow appeared functional but broke down in practice. Separately, the platform had no design system, which meant 20+ developers were making independent interface decisions across a surface area that required consistency to be trustworthy in a clinical environment.
The intake inefficiencies weren't a UX problem in the surface sense — they were workflow problems that manifested in the interface. The actual friction was upstream of what users were clicking. On the design system side, the cost of the gap wasn't obvious until it was quantified: independent decisions across 20+ developers, compounded over time, represented significant redundant effort. The $335k annual savings estimate made the business case legible in terms leadership could act on.
A redesigned intake process scoped to the actual workflow rather than the assumed one, and a design system that gave the development team a shared foundation — reducing redundant effort, improving consistency, and making the platform more defensible in clinical procurement evaluations where trust in the interface is a purchasing factor.
PointClickCare had acquired a platform with two decades of history and no design infrastructure to support it going forward. The engagement addressed two separate problems that each required their own diagnosis.
The patient intake redesign started with the workflow rather than the interface. The inefficiencies weren't where they appeared to be — surface-level friction pointed back to upstream process gaps that the interface had been built around rather than built to fix. Reorienting the redesign around the actual workflow rather than the apparent one was the precondition for anything useful.
The design system work addressed a different problem with a clearer business case. Twenty-plus developers working on a platform without shared foundations meant redundant decisions compounding across every sprint. Quantifying that redundancy — an estimated $335k in annual savings — gave leadership a number to weigh against the investment, which is a different conversation than asking for resources based on design principle alone.
An acquired platform carried two separate problems: a patient intake process with hidden workflow inefficiencies not visible at the feature level, and no design system — leaving 20+ developers making independent interface decisions across a platform where clinical consistency is a trust requirement.
As a clinical staff member, I need the intake process to reflect how the workflow actually runs — not how it was assumed to run when the interface was built.
As a developer, I need shared foundations so I'm not solving the same interface problems independently on every sprint.
As a product leader, I need the design system investment justified in terms the business can evaluate.
Workflow discovery with clinical staff to identify upstream friction points in intake process. Intake redesign scoped to actual workflow rather than interface symptoms. Design system covering primary interface patterns across platform. Business case documentation quantifying redundant effort savings.
Backend system changes, clinical workflow policy, platform feature additions outside intake scope
A healthcare AI product was losing users before they could experience its value. The onboarding assumed a level of self-awareness that most users didn't arrive with.
The product required users to have a degree of psychological readiness before it could help them. That requirement was implicit — nothing in the onboarding named it — but it was real, and users who didn't meet it dropped off before the product had a chance to demonstrate value. The drop-off looked like a retention problem. It was actually a sequencing problem.
The barrier wasn't the product's complexity — it was the entry point. Users were being asked to engage at a depth that presupposed self-awareness they hadn't yet developed. The product was designed for where users needed to end up, not where they were arriving from. No scaffolding existed to bridge that gap.
An adaptive progression framework that met users at whatever self-awareness level they arrived with and moved them through graduated complexity without requiring psychological readiness upfront. Separately, a conversational intelligence system that analyzed behavioral patterns to enable dialogue sophisticated enough to be genuinely useful — rather than generic prompts that users recognized as formulaic and disengaged from.
The engagement was with an early-stage healthcare AI product operating under NDA. The core problem was user drop-off at a point in the experience before the product's value had been demonstrated — a pattern that read as a retention issue but traced back to the product's assumptions about who was arriving and what they were ready for.
The adaptive progression framework addressed the entry point directly. Rather than requiring users to meet a readiness threshold before the product could help them, the framework scaffolded users from wherever they arrived through increasing complexity at a pace the product could assess and adjust. Psychological readiness became an outcome rather than a prerequisite.
The conversational intelligence work addressed a related problem: AI therapeutic dialogue that users recognized as generic disengaged them faster than no dialogue at all. The system analyzed behavioral signals — tone, response patterns, psychological framework indicators — to generate contextually appropriate responses rather than prompt-matched ones.
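The actual system is under NDA, but the shape of the idea can be sketched: derive coarse behavioral signals from what the user writes, then route to a response register rather than matching a fixed prompt. Every signal name, threshold, and style label below is invented for the example:

```python
# Illustrative sketch only: signal-driven response routing instead of
# prompt matching. Signals and thresholds here are invented, not the
# NDA'd system's taxonomy.

def extract_signals(message: str) -> dict:
    """Derive coarse behavioral signals from a single user message."""
    words = message.split()
    return {
        "length": len(words),  # terse vs elaborated replies
        "hedging": sum(w.lower() in {"maybe", "guess", "sort", "kind"} for w in words),
        "first_person": sum(w.lower() in {"i", "me", "my"} for w in words),
    }

def choose_style(signals: dict) -> str:
    """Map signals to a response register rather than a canned prompt."""
    if signals["length"] < 4:
        return "open_ended_invite"    # user is terse: invite elaboration
    if signals["hedging"] >= 2:
        return "validate_then_probe"  # user is uncertain: acknowledge first
    if signals["first_person"] >= 3:
        return "reflective"           # user is self-focused: mirror back
    return "neutral_followup"

print(choose_style(extract_signals("fine")))
# open_ended_invite
print(choose_style(extract_signals("I guess maybe it is sort of my fault")))
# validate_then_probe
```

The contrast with prompt matching is the point: the second message never contains a keyword like "blame" or "fault pattern", yet the hedging signal alone is enough to select a register that acknowledges uncertainty before probing.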
Users were dropping off before experiencing product value. The onboarding assumed psychological readiness that most users didn't arrive with. Conversational AI responses were generic enough that users recognized and disengaged from them.
As a user with low initial self-awareness, I need the product to meet me where I am rather than where it assumes I should be.
As a product leader, I need the onboarding to work across the full range of users we're acquiring — not just the ones who arrive ready.
As a clinical stakeholder, I need the conversational system to reflect genuine understanding of user state rather than pattern-matched responses.
Define user self-awareness spectrum and map entry points across it. Design progression framework with adaptive complexity scaling. Develop behavioral signal taxonomy for conversational intelligence system. Establish value realization milestones for retention measurement.
Clinical diagnosis or treatment recommendation, backend model training, content outside defined progression framework scope
Veterans navigating VA benefits claims had access to information but no system that learned from outcomes. Every claim started from zero regardless of what had worked before.
The veterans benefits claims process is complex, high-stakes, and opaque. Veterans were navigating it without reliable guidance on what strategies actually produced favorable outcomes — not because the information didn't exist, but because no system was capturing or applying it. Each claim was effectively isolated from every claim that had come before it.
The problem wasn't information access. It was that outcomes weren't feeding back into recommendations. Successful claim strategies existed — they just lived in the heads of individual advocates and attorneys rather than in any system that could generalize from them. The gap was structural: no feedback loop, no compounding intelligence, no mechanism for the system to get better over time.
A feedback-driven AI recommendation engine where outcome tracking feeds model intelligence on successful VA review strategies — each resolved claim improving the guidance available to the next. The value compounds over time rather than resetting with every new case.
Vetavize was building toward an AI system for veterans benefits claims — a domain where the stakes are high, the process is opaque, and the difference between a successful and unsuccessful claim often comes down to strategy rather than merit.
The core insight driving the architecture was that outcomes weren't being captured in any form that could improve future recommendations. Successful claim strategies existed in the knowledge of experienced advocates but weren't systematically accessible. The recommendation engine was designed to close that loop — tracking outcomes against the strategies that produced them and feeding that intelligence back into guidance for subsequent claims.
The result is a system that compounds rather than resets. Each resolved claim makes the next recommendation more accurate — a fundamentally different value proposition from a static information resource.
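The compounding pattern can be made concrete with a small sketch. All names here are hypothetical — the real Vetavize system is not public — but the loop is the same: each resolved claim updates per-strategy success rates, so the next recommendation reflects prior outcomes instead of starting from zero.

```python
# Hypothetical sketch of an outcome-tracking feedback loop. Claim types and
# strategy names are illustrative, not drawn from the actual product.
from collections import defaultdict

class StrategyTracker:
    def __init__(self):
        self.wins = defaultdict(int)   # (claim_type, strategy) -> successes
        self.tries = defaultdict(int)  # (claim_type, strategy) -> attempts

    def record_outcome(self, claim_type: str, strategy: str, success: bool) -> None:
        """Feed a resolved claim back into the model: this is the loop closing."""
        key = (claim_type, strategy)
        self.tries[key] += 1
        if success:
            self.wins[key] += 1

    def recommend(self, claim_type: str, strategies: list[str]) -> str:
        """Return the strategy with the best observed success rate for this claim type."""
        def rate(s: str) -> float:
            key = (claim_type, s)
            return self.wins[key] / self.tries[key] if self.tries[key] else 0.0
        return max(strategies, key=rate)
```

A production version would need confidence handling for sparsely observed strategies (a strategy tried once is not better evidence than one tried a hundred times), but the structural point stands: guidance improves as a side effect of normal use.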
Veterans navigating VA benefits claims lacked access to guidance informed by what had actually worked. Successful claim strategies existed but weren't captured in any system that could generalize from them. Each claim started without the benefit of prior outcomes.
As a veteran, I need claim strategy guidance that reflects what has actually worked — not generic advice that doesn't account for VA review patterns.
As a benefits advocate, I need a system that gets smarter over time so my guidance improves without requiring me to manually track every outcome.
As a product leader, I need the feedback loop operational early enough that the system is demonstrably improving before we scale.
Define outcome tracking taxonomy across claim types and review strategies. Architect feedback mechanism connecting resolved claims to recommendation model. Establish baseline claim success rate for improvement measurement. Define advocate-facing interface for recommendation delivery and override capture.
Legal representation, VA process automation, claim filing on behalf of users
"Dan's grasp of web development and trends has really allowed us to push the envelope."
"Dan is always thinking outside the box and bringing fresh, creative ideas."
"We had 'Microsoft', and Dan gave us 'Apple'."
"I sing Dan's praises as often and loudly as I can."
"Do yourself a favor — hire Dan, and you can thank me later!"
"Dan knocks it out of the park. He's flat-out terrific."
"Dan has done absolutely the best analysis for our product!"
"90% of where our platform is today is due to Dan's vision."
"I would work with Dan again in a heartbeat."
"Dan's improvements only require 20% effort to implement, but have an 80% impact."
"His research gave us the needed step to move us forward."
"Dan does 'Class A' work."
"We've had countless agencies do research for us, but Dan is the only one who told us 'the why' of what our users are thinking and doing."
"He is a rare combination of strategic thinking, strong craft and a genuine empathy for users."
"Dan has done phenomenal work for our E-Commerce Platform. He's irreplaceable!"
"Dan is the deadly combo of User Experience and Product Management. It's insane."
"In 'professional environments', not everyone is always professional. Dan IS always a professional!"
"No one seems to know where our product needs to go more than Dan."
"Dan's ability to adapt to evolving projects and stay focused on customers' needs sets him apart."
"Dan is insightful and diligent on making sure that the user comes first in everything he touches."
"Dan's insight, direction, sense of urgency, and long-term vision was invaluable."
"Dan has a vast and understanding mindset."
The work is deceptively simple: determine what gets built, in what order, and why it matters to the people who will use it. The hard part is that the answers are rarely what the organization assumes they are when the work starts.
My practice is built around that gap — between what teams believe they're solving and what's actually driving the problem. Part diagnosis, part stakeholder navigation, part structured decision-making under pressure. The deliverable is a direction the team can build toward with confidence.