In Brief
I'm Gianmarco Rivas. I run LoopTree AI out of Santo Domingo. I've shipped over 80 AI agents and over 200 workflow automations. The bulk of that work has been for international clients and individuals. Sectors I've worked in include pharmaceutical laboratories, oil and gas, financial services and crypto, media, furniture retail, auto detailing, pastry businesses, and laundry services. This post focuses on what I've actually seen on the ground in the DR: the three conversations I keep having with prospects, how a typical project runs, the mistakes I've made, six recommendations I'd stand behind, and what I'd tell any Dominican business considering AI right now. No hype. Just what's working.
My experience implementing AI in the DR
Most founders I meet assume AI for their business means a ChatGPT-style chatbot on their website. In my experience, it almost never does. The real conversation — in the DR, and honestly in most of the markets I work across Latin America — is about WhatsApp. That's where the customers are. That's where the backlog of unanswered messages lives. That's where business owners lose sleep on Sunday nights, scrolling through 200 unread threads from leads they won't get to until Tuesday. If you want to understand AI implementation on the ground, you start there.
A few patterns show up almost every time I start a project. First, WhatsApp isn't a channel — it's the channel. A restaurant, a real-estate agent, and a cardiology clinic all run customer communication through it. Second, teams are small. The companies calling me are rarely over 30 people, and most are under 10, which means one person is doing the work of three and can feel every hour AI gives back. Third, the operational stack is already digital but disconnected: Google Sheets, HubSpot or a basic CRM, WhatsApp Business, email. The data exists. Nothing talks to anything else.
The thing I wish more companies understood before they start is that AI implementation is a learning curve, not a switch you flip. The first weeks are always about watching how the team actually works, seeing where the agent gets it right, and catching the edges where it gets it wrong. The impact on a company isn't the AI itself — it's what gets freed up: hours per week a founder used to spend on triage, response times that drop from hours to seconds, a sales pipeline that stops losing leads overnight. That's usually what surprises people. The tech is the smaller half of the story.
The three conversations I keep having
When a Dominican business books a call with me, it is almost always one of three conversations. The order of frequency has barely changed in the past year. Here is what each looks like in practice, with real numbers from implementations I have actually shipped.
A local laundry automated customer support and order tracking via WhatsApp: 6,500+ messages handled, 1,100+ orders managed.
A furniture company stopped losing ad-generated leads with an AI sales agent: 847 leads in 5.5 weeks, 13-second median response.
The third conversation is internal workflow automation: less frequent, but increasingly common. Teams want automatic lead scoring in their CRM, email triage that routes urgent requests to the right person, report generation from a Google Sheet that used to cost someone an hour every Monday, or multi-step workflows that were burning a dozen copy-paste operations per ticket. The pattern is always the same: connect the tools you already use, then layer intelligence on top. I refuse to ask a client to migrate off HubSpot or Sheets just to adopt AI. That migration usually burns more value than the AI adds.
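To make the "layer intelligence on top" idea concrete, here is a minimal sketch of a lead-scoring rule layer that runs over CRM rows you already have (a HubSpot export, a Google Sheet). The field names and point values are hypothetical, not from any real implementation:

```python
# Illustrative lead-scoring layer over existing CRM data.
# Field names and weights are made-up examples; tune them to your own funnel.
def score_lead(lead: dict) -> int:
    """Return a 0-100 priority score for a single lead record."""
    score = 0
    if lead.get("replied_on_whatsapp"):
        score += 40                          # engaged leads convert best
    if lead.get("budget_stated"):
        score += 30                          # qualified on budget
    if lead.get("source") == "meta_ads":
        score += 20                          # paid traffic is time-sensitive
    if lead.get("hours_since_contact", 0) > 48:
        score -= 25                          # lead is going cold
    return max(0, min(100, score))           # clamp to 0..100
```

The point of the sketch is the architecture, not the weights: the data stays in the tool the team already uses, and the scoring logic reads it and writes a single extra column back.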
The sectors my clients come from
Not sectors in the abstract: this is where my actual portfolio in the Dominican Republic sits. Across these, the shared trait is volume: either high volumes of repetitive customer messages, or high volumes of manual internal work that is eating someone's week.
Pharmaceutical laboratories. One of the fastest-growing corners of my pipeline. Labs handle high-volume queries around appointments, results, and compliance, much of it over WhatsApp. AI takes the repetitive load. Humans keep the sensitive work.
Oil and gas. Less obvious, more lucrative. The wins here are internal: document processing, report generation, multi-step operations workflows. The manual load is enormous, which makes single-project ROI the highest I see.
Financial services and crypto. Account questions, KYC intake, document flows. Regulated enough that you build carefully. Repetitive enough that ROI shows up quickly when you do it right. Crypto teams in particular have been asking for triage-heavy agents.
Media. Editorial ops, content distribution, audience engagement. More often internal workflow automation than customer-facing agents. It reclaims hours the team was spending on coordination and formatting instead of actual reporting.
Furniture retail. A classic use case. Heavy ad spend on Meta and Google generates lead volume that salespeople cannot keep up with. An AI agent pre-qualifies every inbound before a human ever touches the lead. See the furniture case study above.
Auto detailing. Appointment booking and WhatsApp-driven customer flow. Smaller operations on paper, but the scheduling logic is surprisingly complex. That is exactly where an agent earns its keep.
Everything else. Pastry businesses, laundry services, and other high-volume operations where the same patterns repeat — either heavy customer message volume or heavy internal workflow weight.
How a typical project runs
Four stages. I am including this because most prospects ask some version of “what would working with you actually look like?”, and because seeing the shape of an AI implementation project in the Dominican Republic before you commit removes most of the uncertainty.
The first call (free, 30 min)
Every project starts with a call. It is free, it is 30 minutes, and I spend most of it asking questions, not pitching. I want to understand how your team handles messages or leads today, where the bottleneck actually sits, and what your day looks like when the workflow works well and when it does not. If AI is not the right answer for you, I will say so. Roughly a third of the calls I take end with me recommending something simpler, like a better inbox setup, a CRM trigger, or a Google Sheet template, because the problem did not need an agent.
The demo, built on your data
If AI makes sense for your case, I build a working demo on your actual data: your product catalog, your past conversations, your tone of voice. You get to try it on your own scenarios, break it, and find the edge cases. This usually takes a few days. The reason I work this way is that generic demos mislead. You cannot tell from a vendor's slide deck whether their agent will actually handle “si están los platos de camarones pa' llevar” (roughly, “do you have the shrimp plates to go?”) the way your customers write it. You need to see it fail and succeed on your real inputs before you spend money.
The build (2 to 6 weeks)
Straightforward projects, like a WhatsApp agent with a basic dashboard, run 2 to 3 weeks. Multi-channel setups, CRM integrations, or anything with compliance requirements run 4 to 6. You get a weekly update with what I built, what I tested, and what is left. No big-reveal launches. I would rather you catch something odd in week two than at go-live. Every workflow gets staged against your real traffic before it goes live publicly.
The handoff (you own it)
Every project ships with a control panel. You see conversations, response times, escalations, and the outcomes the agent is driving. You update the knowledge base yourself without filing a ticket. You train new team members on it in 15 minutes. I stay available after launch for tuning and new workflows, but the goal is that by day 30 you can run the thing without me. I have watched too many Dominican businesses get locked into vendors who made themselves indispensable. I will not do that.
My recommendations, if you are evaluating AI
Six things I would tell any Dominican business owner who is thinking about AI, whether they end up working with me or not. Each one comes from something I either did right or learned the hard way.
Start with one workflow, not ten
Every project I have done that worked started with a single workflow. Every one that stalled, including one of my own early on, tried to automate four things at once. Pick the workflow bleeding the most hours or losing the most leads. Ship that. Expand later. Businesses that treat AI as a platform rollout tend to struggle. The ones that treat it as one problem at a time get results in weeks.
Demand visibility from day one
If you cannot see what the agent is doing, you do not have an AI agent. You have a black box with a subscription fee. Every implementation I ship includes a dashboard: live conversations, response times, escalation rates, and whatever outcome matters for that workflow. Ask your provider to show you the dashboard before you sign anything. If they do not have one, or theirs is a generic panel with no insight into your specific workflow, that is your answer.
Human escalation is non-negotiable
No AI agent should be the last line. Every agent I build has confidence thresholds. When it is not sure, it stops, routes to your team with full context and a suggested reply, and waits. The mistake I see in the wild is agents that answer anything because they were told to. That is how a customer gets a confidently wrong answer at midnight. Escalation is a feature, not a failure.
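The confidence-threshold routing described above can be sketched in a few lines. This is a simplified illustration, not my production code; the threshold value, the `Draft` shape, and the `route` function are all assumptions for the example:

```python
from dataclasses import dataclass

# Assumed cutoff for this example; in practice the threshold is tuned
# per workflow against real traffic.
ESCALATION_THRESHOLD = 0.75

@dataclass
class Draft:
    reply: str
    confidence: float  # scored confidence for this reply, 0.0 to 1.0

def route(draft: Draft, context: str) -> dict:
    """Send the reply if the agent is confident; otherwise stop and
    hand off to a human with full context and a suggested reply."""
    if draft.confidence >= ESCALATION_THRESHOLD:
        return {"action": "send", "reply": draft.reply}
    return {
        "action": "escalate",
        "context": context,            # full conversation so far
        "suggested_reply": draft.reply,  # human can edit and approve
    }
```

The design choice worth noting: the low-confidence branch still carries the draft reply. The human gets a head start instead of a blank escalation ticket.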
Build on the stack you already use
Your customers are on WhatsApp. Your sales team lives in HubSpot. Your ops team's lifeblood is a Google Sheet. A good provider integrates with all of that. A bad one asks you to migrate. If someone tells you the agent only works inside their platform and your team needs to switch CRMs or inboxes to use it, walk away. The migration cost usually exceeds whatever AI was supposed to save you.
Bilingual is table stakes in the DR
Your agent needs to handle informal Dominican Spanish, switch to English mid-thread without asking, and understand Anglicisms and code-switching. I test every agent with a list of 30+ real messages pulled from actual Dominican conversations. Most generic providers fail on the first five. If your provider cannot show you bilingual testing with your customer data, assume the agent will embarrass you publicly in week one.
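A bilingual test suite of the kind described above can be as simple as a loop over real messages with an expected reply language. The harness below is a hypothetical sketch: `agent_reply` and `detect_language` are stand-ins for whatever agent and language-detection step your own stack uses:

```python
# Hypothetical bilingual regression harness. Real suites use 30+ messages
# pulled from actual customer conversations; three shown for illustration.
TEST_MESSAGES = [
    # (customer message, expected language of the agent's reply)
    ("si están los platos de camarones pa' llevar", "es"),
    ("hey, do you guys deliver to Piantini?", "en"),
    ("klk, cuánto me sale el delivery?", "es"),
]

def run_suite(agent_reply, detect_language):
    """Run every test message through the agent and flag any reply
    that comes back in the wrong language. Returns the failures."""
    failures = []
    for message, expected_lang in TEST_MESSAGES:
        reply = agent_reply(message)
        if detect_language(reply) != expected_lang:
            failures.append((message, reply))
    return failures
```

An empty return means the agent matched the customer's language on every message; anything else is a concrete failure case to show the provider before launch.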
Demand a real test before you commit
Free discovery calls are table stakes. A real demo on your data, before any contract, is where you separate operators from resellers. If a provider wants a signed agreement before they will show you the agent responding to your actual customer patterns, that tells you how much confidence they have in their own work. I do this differently on purpose.
What I would do differently
Four honest lessons from projects that did not go to plan. I include these because the version of this post where I pretend everything has worked perfectly is not useful to anyone. And if you are evaluating AI providers in the Dominican Republic, understanding how someone thinks about their own mistakes tells you more than any case study.
Lesson 1
I shipped without a dashboard because the client “did not need one.”
Three weeks post-launch they could not tell me if the agent was working or not. I could not either. That project stayed in limbo for a month longer than it should have, because we were arguing about anecdotes instead of reading data. Now the dashboard ships on day one, no exceptions. The client almost never “does not need one.” They just have not realized they need one yet.
Lesson 2
I skipped the discovery call and went straight to a demo.
I thought I was saving the client time. I was building the wrong thing twice. Now I do not build anything until I have watched how a real conversation actually happens in the team's current workflow. Sometimes I shadow them for an hour. It is the single highest-ROI time I spend on any project, and it is the thing I almost always want to skip.
Lesson 3
I built a technically good agent for an industry I did not understand.
It was never really used. The gap was simple: I did not know what a “good” interaction looked like in that world, so I could not tell when the agent was being helpful versus being off. Now I spend the first days of every project learning the industry before I write a single line of prompting — shadowing the client's inbox, asking what a great interaction looks like and what a terrible one looks like, paying attention to the details the team assumes everyone knows. That context is what lets me propose features the client did not know to ask for — small additions that add real value on top of what they originally requested. Industry fit beats model quality.
Lesson 4
I underestimated what the tokens would actually cost.
AI is fun. It is easy to get caught up in what the agent can do and forget that every LLM call has a real cost attached to it, in tokens. On a couple of early projects the client's monthly bill ran higher than what I had estimated, and that was on me. Now I sit with the client in the first week and model the expected token volume against their actual traffic — peak hours, seasonal swings, the tasks that genuinely need the more expensive model versus the ones a smaller model handles fine. I always quote a contingency on top of the base estimate, because some tasks have no workaround: they need the expensive model, and that is that. The goal is a workflow that matches the client's infrastructure and their real traffic, and an invoice with no surprises on it.
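The token modeling described above is back-of-envelope arithmetic, and it can be written down. Every number in this sketch is an assumption: plug in your provider's actual per-token prices and your real traffic.

```python
# Back-of-envelope monthly LLM cost model. All defaults are assumptions;
# replace the prices and traffic numbers with your own.
def monthly_cost(
    conversations_per_day: int,
    turns_per_conversation: int,
    input_tokens_per_turn: int,
    output_tokens_per_turn: int,
    price_in_per_1k: float,    # USD per 1,000 input tokens
    price_out_per_1k: float,   # USD per 1,000 output tokens
    contingency: float = 0.2,  # buffer for peak hours and seasonal swings
) -> float:
    turns_per_month = conversations_per_day * turns_per_conversation * 30
    base = turns_per_month * (
        input_tokens_per_turn / 1000 * price_in_per_1k
        + output_tokens_per_turn / 1000 * price_out_per_1k
    )
    return round(base * (1 + contingency), 2)
```

For example, 100 conversations a day at 4 turns each, with 800 input and 200 output tokens per turn at illustrative prices of $0.003 and $0.015 per 1,000 tokens, lands under $80 a month with the contingency included. The contingency parameter is the "no surprises on the invoice" line item.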
Common questions
Which industries in the DR benefit most from AI?
Across my own portfolio in the DR, the fastest ROI has come from pharmaceutical laboratories, oil and gas, financial services and crypto, media, furniture retail, and auto services. The common thread is not the industry itself. It is volume: either high volumes of repetitive customer messages, or high volumes of manual internal work.
Can AI agents handle Dominican Spanish?
Yes. Modern AI language models understand regional variations of Spanish including Dominican expressions. Agents can be configured to respond naturally and switch between Spanish and English as needed.
How long does implementation take?
Most implementations go live within 2 to 6 weeks. A straightforward WhatsApp AI agent can be ready in 2 to 3 weeks. Multi-channel setups with CRM integrations typically take 4 to 6 weeks.
Do I need a technical team to manage an AI agent?
No. I build a custom control panel so you can manage settings, update the agent's knowledge, and review performance without writing code. The goal is visibility and control without technical skills.