How to fire a trigger for existing records in Salesforce using Flows

So you’ve just deployed a new trigger or flow and you want to push your org’s existing records through it. This is often referred to as a “Mass Touch” or “Mass Update”. There are many ways to achieve it, some involving exporting data with third-party tools, but here’s one approach that can be done entirely within Salesforce, and with no code!

Create a new “Mass Touch” field

We’ll start the process by adding a new field on the object we want to fire the trigger on. This field will be used to update the record without touching its existing data.

  • Go to Salesforce Setup.
  • Select Object Manager.
  • Locate the object you want to execute a mass touch for and select it.
  • Go to Fields & Relationships.
  • Select New.
  • Data Type: Number.
  • Press Next.
  • Field Label: Mass Touch
  • Length: 1
  • Decimal Places: 0
  • Field Name: MassTouch
  • Help Text: This field should only be used to invoke an update on the record and should not be populated by user entry.
  • Press Next.
  • Only provide visibility to your profile (or the profile of the user completing this process).
  • Press Next.
  • Uncheck Add Field to prevent the field from being added to the page layout.
  • Select Save.

Your object now has a new field that can be updated without impacting existing business data.

Creating a scheduled flow

We now need to create a flow that’s going to look at all the records on the object and update the Mass Touch field just created. We’re going to do this using a Scheduled Flow as this will allow us to set a time for the process to fire. If your object has a lot of existing records it may be preferable to run this process out of hours.

  • Go to Salesforce Setup.
  • Search for Flows in the left navigation and select Flows.
  • Select New Flow.
  • Choose Schedule-Triggered Flow and select Create.
  • Select Set Schedule.
  • Choose a Start Date and Start Time that work for your org.
  • Set Frequency as Once.
  • Select Done.
  • There is no requirement to specify an object in the Start element, so leave this empty.

The start element in the flow is now configured. Let’s follow that up and add an Update Records element.

  • Press the + icon to open the Add Element view.
  • Scroll down to Data and select Update Records.
  • Label: Update Records
  • API Name: UpdateRecords
  • Description: Set the records' Mass Touch field to a new value.
  • How to Find Records to Update and Set Their Values: Specify conditions to identify records, and set fields individually.
  • Under Update Record of This Object Type, search and select the Object you want to perform the mass touch on.
  • Under Filter Object Records, you can add conditions to mass touch only a selection of records, or set this to “None” to update them all. We’ll use None for this example.
  • Under Set Field Values for the Object Records, choose the Mass Touch (MassTouch__c) field.
  • Value: 1. (If you run a mass touch on this object again in the future, change this value to 2, then 3, and keep cycling the number for each mass touch you complete; it just needs to differ from its previous value.)
  • Select Done.
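Conceptually, the Update Records element above just produces a bulk update payload. The same logic can be sketched in plain Python with hypothetical record data (no Salesforce connection involved); `MassTouch__c` is the field created earlier, and `next_touch_value` implements the “just change the number” rule:

```python
def next_touch_value(current):
    """Return a Mass Touch value that differs from the current one.

    Any change to the field fires the record's automation, so
    incrementing (treating an empty field as 0) is a simple safe choice.
    """
    return (current or 0) + 1

def build_mass_touch_updates(records, field="MassTouch__c"):
    """Build the update payloads the flow's Update Records element produces."""
    return [
        {"Id": rec["Id"], field: next_touch_value(rec.get(field))}
        for rec in records
    ]

# Example with made-up record Ids (illustration only)
records = [
    {"Id": "0015f00000AAAAA", "MassTouch__c": None},
    {"Id": "0015f00000BBBBB", "MassTouch__c": 1},
]
updates = build_mass_touch_updates(records)
print(updates)
```

The flow does all of this declaratively; the sketch is only to make the record-touching mechanism concrete.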

With the flow configured, we now need to save it and name it.

  • Select Save from the top right.
  • Provide a Name and Description for the flow so that you will be able to identify it in the future should you wish to run it again.

Validation rule management

Before running a mass touch job on the object, we need to consider validation rules. Flows do not have an elegant way to bypass validation rules, so keep an eye out for flow errors when the job runs. An email listing the errors encountered during the flow will be sent to your user account’s email address, allowing you to correct the conflicting records manually (see image below).

If a few validation rules are common offenders, consider disabling them temporarily to complete this task, but remember to re-enable them when you’re done!

Run the flow

The flow is now fully configured and ready to be activated. Use the Activate button in the top right to enable the flow. The mass touch operation will begin when the scheduled time configured in the Start element is met.

If you don’t want to wait for the scheduled time you can run the flow immediately by selecting Debug. Make sure that you uncheck “Run flow in rollback mode” to ensure that the mass touch’s changes persist after the flow has been executed.

Remember to check your emails after the flow has run to review any errors, as a single failing record can roll back the updates to multiple other records.


We hope you have found this article helpful for your organization. A final reminder: be vigilant when running mass update operations in a production environment, and where possible back up your data first.

If you’d like to find out more about what ProvenWorks do, check out our homepage.

When is a Salesforce Field no longer useful?

One of the many advantages of CRM systems is that it is very easy for business users to add additional fields. One of the main disadvantages of CRM systems is that it is very easy for business users to add fields without giving thought to the ‘bigger picture’.

In this blog post we will discuss our approach for determining when to remove fields from the system.

TLDR: The basic principle is: when a field is no longer trustworthy, is there any point in keeping it in your production system?

Can we trust this field?

A field is untrustworthy if you (and anyone else) cannot find a consistent process for populating or maintaining that field despite reasonable efforts to do so.

Let’s take an example of an Account field that says “Number of employees at this location” and some of the questions we’d ask to determine how much we trust it:

1. Who collects/populates it?

Is it the Sales Rep? Is it the marketing department? Is it from a third party data provider?
– Don’t know and can’t find out? Back it up and delete it!

2. When is it collected?

At time of record creation? When the Account becomes a customer? When there’s an opportunity?
– Nobody knows? Remove it.

3. When was it first ‘put into production’?

This is vital if we are doing any kind of historical analysis.
– If the field is only there for trend analysis and the data appears patchy then it’s useless.

4. When was it last updated?

What’s the worst case? How stale might the value be?
– No historical value? Lose it.

5. Do we actually understand what the field means?

Is it unambiguous?
– If no one can understand what the field means, it’s useless.

Do we understand what the field means?

If you’re interested in a deep dive on field naming, see our previous post on field naming conventions.

In short, what we must do is look at the field name/label and determine if the question used to populate the field is at all ambiguous.

Even if the IT/IS/Sales Ops department understands the meaning of the field, do the people who are actually populating it? What about users who primarily speak different languages?

We work with a number of organizations and see lots of different field naming conventions. An example of an ambiguous field we saw recently was an Account field simply labelled “MSP Customer”. MSP means Managed Service Provider. Due to the client’s channel sales model, it was unclear whether the field meant the account was itself a “Managed Service Provider”, or was effectively an indirect customer because it purchased through a Managed Service Provider.

If we cannot understand what the field means, we cannot fulfil its purpose. Ultimately this not only wastes time but can also lead to inaccurate reporting, misinterpretation and misinformed business decisions.

What is the purpose of having a field anyway?

Let’s finish up by reminding ourselves of the basics. We believe there are two key purposes for having a field in a CRM:

  1. To mark a record as being at a certain stage in a process and/or
  2. To record data for later analysis (reporting).

If a given field is not adding any value to a current business process and it has no historical value for the reasons listed above, back it up and delete it before someone makes a business decision based on its content!

About ProvenWorks

We mean it when we say we’re Salesforce experts. We work exclusively in the Salesforce ecosystem and our products are built 100% for Salesforce.

Field naming convention for Salesforce and database tables

Joel Mansford is the Founder and Managing Director of ProvenWorks. With many years’ experience working in Sales & Marketing building databases, reports and customising CRM systems, he considers himself a techie at heart.

In this blog I’ll take you through a variation of a field naming convention for Salesforce that we use at ProvenWorks and recommend for use within Salesforce systems. To be clear we’re talking about the API name (or underlying table names) and not the labels that are exposed to users.

I want to stress that the most important thing is to have a field naming convention; what that convention actually is matters less. You will know you have a good convention when you get the field/API name correct >90% of the time without knowing the table well, simply because you know how a field would have been named given its purpose.

Name your fields

If you’re reading this then you’ve probably already determined that you need to think about the naming convention of your fields. On a new field creation, Salesforce first asks for the label. It then creates an API name based on this label. This means you can get:

Label=”Last 2 letters of Mother’s maiden Name”,

API name = “Last_2_Letters_of_Mother_s_maiden_Name__c”

Note that the spaces and the apostrophe all become underscores, leaving the ‘s’ standing on its own and the API name overly long (those underscores aren’t helping us at all). Also, if the label later changes (e.g. “2” -> “two”), the API name does not change with it, and confusion ensues because developers relying on this ‘convention’ will assume the API name still matches the label.
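Salesforce’s auto-generation can be roughly approximated in Python to see why these labels produce awkward API names (this is a sketch; the platform’s actual rules have more edge cases):

```python
import re

def auto_api_name(label):
    """Rough approximation of how Salesforce derives a custom field's
    API name from its label: runs of non-alphanumeric characters
    (spaces, apostrophes, etc.) collapse into single underscores, and
    the custom-field suffix __c is appended."""
    name = re.sub(r"[^A-Za-z0-9]+", "_", label).strip("_")
    return name + "__c"

# The apostrophe strands the 's' between underscores, as described above.
print(auto_api_name("Mother's maiden Name"))
```

Running this shows the stranded `_s_` fragment that a deliberate naming convention avoids.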

Naming conventions

Field names should use Upper Camel Case or Pascal casing. Here’s an excerpt:

“CamelCase (also spelled “camel case” and sometimes known as medial capitals[1]) is the practice of writing compound words or phrases in which the words are joined without spaces and are capitalized within the compound — as in LaBelle, BackColor, or iMac. The name comes from the uppercase “bumps” in the middle of the compound word, suggestive of the humps.”

Try to avoid abbreviations unless they are very widely used, e.g. Id is fine instead of Identifier, but Pro_Serve is a poor abbreviation of Professional Services: it truncates each word inconsistently, and “Serve” is not a recognised abbreviation of “Services”. We’ll get onto verbosity in naming later.

Acronyms are treated as if they were ordinary words, so SFDC Account Id becomes SfdcAccountId. Whilst this isn’t intuitive, it is necessary to separate the words out correctly: if SFDC were left in all capitals, many tools (like SSRS) would convert it to “S F D C Account Id”, which is ‘more’ wrong. In this example, “SalesforceAccountId” would be best.
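The acronyms-as-words rule is easy to express as a small helper (a sketch for illustration, not a tool we ship):

```python
def to_pascal_case(phrase):
    """Convert a space-separated phrase to PascalCase, treating
    acronyms as ordinary words (SFDC -> Sfdc) so that word
    boundaries remain recoverable by tooling."""
    return "".join(word[:1].upper() + word[1:].lower() for word in phrase.split())

print(to_pascal_case("SFDC Account Id"))  # SfdcAccountId
print(to_pascal_case("Created Date"))     # CreatedDate
```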


The underscore (_) character can be used after a field prefix to logically group fields together, for example:

  • Software_Amount becomes Amount_Software
  • Maintenance_Amount becomes Amount_Maintenance
  • Training_Amount becomes Amount_Training

This grouping means that when fields are sorted alphabetically, logically-related fields appear together – very useful in implementations with >150 fields! Grouping is easily achieved by slight word re-arrangement. For example, although we would usually verbally say “Date Created” here we name the field CreatedDate.
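The payoff is easy to demonstrate: with a shared prefix, a plain alphabetical sort (which is how most field pickers and query tools list columns) clusters the related fields together. Field names here are illustrative:

```python
# A mix of prefix-grouped amount fields and other fields.
fields = [
    "CreatedDate",
    "Amount_Software",
    "CloseDate",
    "Amount_Maintenance",
    "Amount_Training",
]

# Alphabetical order brings all Amount_ fields together at the top.
print(sorted(fields))
```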

Grouping (underscore) isn’t absolutely necessary for field ‘pairs’ for example CreatedBy and CreatedDate or Contact.AccountId and Contact.AccountType (as the two fields will always be populated together). However, if you anticipate other fields being added to this theme then grouping can be beneficial and makes reviewing the column/field list much easier.

Choice of words & verbosity

This blog only outlines the conventions for naming. The actual choice of words is a task best proposed by one individual and then reviewed by others to ensure that the words are a good reflection of the purpose of the field without extra clarification. It is very rare that an individual gets the field naming right alone. Bounce the ideas and then finally make sure the actual text is reviewed for typos(!).

However great the temptation to “quickly create a field”, you should always resist. Spending a few minutes carefully considering the naming will at a minimum save you time later and, more importantly, prevent the field being misinterpreted and wrong business decisions being made.

Remember it is better to be unambiguous and have a long field name than a short field name that is open to interpretation. If a developer complains that it takes too long for them to type a long field name then I’d suggest they’re in the wrong profession if they struggle typing a dozen extra characters.

About ProvenWorks

We mean it when we say we’re Salesforce experts. We work exclusively in the Salesforce ecosystem and our products are built 100% for Salesforce.

Data Cleaning 101: 4 common data problems solved

This is the third instalment of our Data Cleaning 101 mini-series. So far we’ve explored why clean data is essential in Salesforce, and introduced 5 steps to start you on your journey to cleaner data. Now it’s time to dive into the details! We’re going to examine the different problems you probably have with your data and take a closer look at some of our favorite ways to tackle them!

The problems

  1. users entering bad data
  2. unstandardized data and messy reports
  3. duplicates and more duplicates
  4. inaccurate old data

1. Users entering bad data 

It’s great if you prioritize clean data, but if your users can still input bad data with no barriers, we’re back to square one. 

For this problem, we’re looking for solutions that will clean data at the point of entry.

  • Use validation rules

Validate fields to meet set criteria before the data gets saved to your database. Design them carefully and with the end-user experience in mind.

  • Use alternative data types

Do you need to have everything as a text field? Could picklists, radio buttons, or checkboxes be used instead for the data you’re storing?

If you’re looking at AppExchange solutions, use our five step guide from Part 2 to assess whether the solution resolves your specific problem.

💡 It’s important to consider everybody across your Salesforce organization and make changes that also consider the end-user experience.

2. Unstandardized data and messy reports

As basic as it sounds, ensuring that your data is uniform and standardized will save you time on all kinds of tasks.

For example, if you create a report to find accounts based in Mississippi, imagine if every record stores the state the same way, rather than a mix of MS, Mississippi, and assorted misspellings. No more bloated field filtering, just clean data and uniform reports!

For this problem, we’re focusing on ways we can better organize our data.

  • Use international standards

Use internationally-recognized standards to segment your data in the best way possible. They’re designed to be consistent and globally understood!

  • Run standardization tasks

Consider exporting data to standardize it with your favorite tools then reimport it back into your organization, or look into ways you can automate it within Salesforce.

  • Apps like IndustryComplete* can streamline industry categorization, SimpleImport* can speed up those importing tasks, and AddressTools* will start standardizing your address data immediately! (Free versions are available for some of our tools*, to get you up and running at no cost!)
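A standardization pass like the Mississippi example above can be sketched in a few lines of Python. The alias table and field values are hypothetical; a real pass would cover far more variants or lean on a dedicated tool:

```python
# Map free-text US state entries to their standard USPS two-letter codes.
STATE_CODES = {
    "mississippi": "MS",
    "missisippi": "MS",   # common misspelling
    "ms": "MS",
    "texas": "TX",
    "tx": "TX",
}

def standardize_state(value):
    """Return the standard code for a raw state entry, or the original
    value unchanged (so it can be flagged for manual review) when the
    entry isn't recognised."""
    return STATE_CODES.get(value.strip().lower(), value)

print(standardize_state(" Mississippi "))  # MS
print(standardize_state("ms"))             # MS
print(standardize_state("Misissippi"))     # unrecognised, left for review
```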

3. Duplicates and more duplicates

If your role involves importing and entering data, duplicates are a constant battle. Dealing with duplicate information, or “dupes”, is a huge black hole of time for admins and, fundamentally, for your business too.

Find a solution that stops or at least clearly alerts you to duplicated data. Have you implemented Duplicate Rules in Salesforce? Or is there a budget for a dedicated duplicate check tool? You have some great options out of the box and also on the AppExchange.

  • If you’re importing data, Excel and Google Sheets have some great duplicate handling tools available. Try them out before importing your data into Salesforce.

  • Duplicate rules in Salesforce are not to be sniffed at! They can be a great free feature if you’re on Professional, Enterprise, Performance, Unlimited, and Developer Editions.

  • And of course, we love SimpleImport Premium’s* multi-field matching, which helps identify existing records more easily and prevents duplicates from being inserted and saved.
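The multi-field matching idea itself is simple to sketch in plain Python as a pre-import check: a record only counts as a duplicate when several fields match at once. Field names and data below are hypothetical:

```python
def dedupe_key(record):
    """Build a matching key from multiple fields, so records only count
    as duplicates when Name AND Email both match (after normalizing
    whitespace and case)."""
    return (record["Name"].strip().lower(), record["Email"].strip().lower())

def find_duplicates(incoming, existing):
    """Return the incoming rows whose key already exists in the org."""
    existing_keys = {dedupe_key(rec) for rec in existing}
    return [rec for rec in incoming if dedupe_key(rec) in existing_keys]

existing = [{"Name": "Acme Corp", "Email": "info@acme.example"}]
incoming = [
    {"Name": "ACME Corp ", "Email": "info@acme.example"},  # duplicate
    {"Name": "Acme Corp", "Email": "sales@acme.example"},  # new record
]
dupes = find_duplicates(incoming, existing)
print(len(dupes))  # 1
```

Matching on a single field (name alone, say) would wrongly flag the second row; combining fields is what keeps the check precise.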

4. Inaccurate old data

Up to 40% of user productivity is lost to incomplete, inconsistent data. Sorting bad data from good wastes 50% of users’ time. These stats alone show how existing bad data is bad for business.

Removing or validating existing data is essential for your users to remain focused and productive. If some data is now redundant, archive or delete it from your system altogether, but if it’s still in use, correct it by updating and validating it. There are plenty of applications that not only draw your attention to the incorrect data, but give you the option to update, change or delete it.

  • Schedule regular spring cleans of your org – regularly diving into your data is the best way to ensure it is up to date and to spot any black holes of outdated information! Checking through your data every two weeks works best, as our Operations Director, Beth, recommends in our 5 step guide (Part 2).
  • Tools that have access to international databases can be an invaluable addition to your org, i.e. looking up address data against postal authorities.


Now you have the tools and wisdom to win the war against bad data. Let’s recap the four main data problems we’ve tackled:

  • users entering bad data
  • unstandardized and messy data
  • the dreaded duplicates
  • inaccurate old data 

Hopefully, with the facts and resources presented in this final instalment of our Data Cleaning 101 mini-series, you can take full control of your org and clean up your bad data!

Data Cleaning 101: a round up

So now it’s the end of our little crusade against unclean data. But first let’s go over what we’ve added to your data cleaning arsenal.

Data Cleaning 101 has provided you with:

  • a reality check about maintaining clean data in your CRM (blog one),
  • 5 tips to remember before purchasing a solution to figure out what you want to get out of your data (blog two),
  • 4 specific data problems you might have and how to solve them.

We’d love to hear your stories about bad data and the applications that saved your day. Are there any tips you think we should know about? Tell us!

Or, if you’ve simply enjoyed learning about data and data cleaning, let us know; we’d love to hear from you.

I want to start cleaning up my data!

Got issues with your data in your Salesforce org that you’re ready to tackle? Get in touch and see if one (or more) of our solutions can help clean the dirty data stinking up your CRM. 

Data Cleaning 101: 5 steps to start cleaning your data

If you’ve been following our series so far, you will have learned all about why you should care about data cleansing. Did you know that bad data can cost companies up to a quarter of their revenue? So, cleaning bad data is paramount for good data practice. But for more reality checks, catch up on the first instalment: why should you care about data cleansing? 

Cleaning bad data can be tricky

This article presents 5 essential steps for you to better understand your org, your data, and how to start the data cleansing process. By following these steps, you can go forward and make well-informed decisions for your company. You may even discover some quick wins along the way.

Follow through these 5 steps to begin your journey to cleaner data.

1. Ask yourself: why do I need this data?

It may sound broad, but knowing what you want to get from your data is vital. So take a step back and look at your end goal. Are you collecting all of the information you need? Or maybe you’re collecting too much? After all, there’s no point collecting data if you’re never going to use it. One less field you’re capturing is one less field you need to maintain!

Start by creating and establishing a plan outlining what clean data means for you and your org. You can’t do anything until you understand your org and users. 

For some prompts, start here:

  • What data do I need to achieve my business goals?
  • Am I capturing everything I need to succeed?
  • Am I collecting some data for the sake of collecting data?

2. Examine where your data is coming from

It’s time to identify all the different ways that data is entering your org. It’s likely that when you dig deeper, there will be more entry points than you think!

A few entry points to consider:

  • service agents
  • sales reps
  • partner users
  • web-to-lead forms
  • integrations

Once you’ve made your list and found the holes, you can do something about them!

3. Plug the holes to clean the data

I saw a great analogy on Reddit:

“I think of improving data quality as a sinking boat. If you are sinking, you need to plug the holes first (sources of bad data) and then start bailing out the water (getting rid of the bad data) second.”


Simply put, if data enters your org clean, this saves you from battling with your records later on. This should be the initial focus. Thankfully there are lots of useful ways to make this a reality with minimal effort that don’t impede the end user experience.

Where possible, think of picklists, validation rules, or even consider managed packages available from the AppExchange.

4. Try the AppExchange!

Solutions on the AppExchange can be a great cost-effective, and time-efficient way to help you combat unclean data. Managed package providers are often experts in their field, so finding a ready-to-go data management solution will help you clean your data in no time!

So how do you choose the right app for you? This isn’t an exhaustive list, but we’ve included a data cleansing focused checklist to ask the managed package providers:

  1. Does the solution automatically resolve data quality issues?
  2. Does the solution require changes to the user experience?
  3. Does the solution clean all my data entry points?
  4. Does the solution clean up pre-existing data?
  5. How long does it take to implement the solution?

After all, don’t reinvent the wheel when it’s already turning – if you have an issue, it’s likely that hundreds of others do too!

5. Start building good data habits for cleaner data

So you’ve taken a good look at your business, identified where your data is coming from, and even potentially put some actions in place to begin cleaning up your org. Now it’s time to maintain this good practice as your org grows and share your wisdom.

Here are a few ways you can build good data habits in your organization:

  • Get users involved and trained in what to look out for
  • Identify entry points that are inputting bad data
  • Schedule regular manual checks for your data 

Operations Director, Beth, is our resident data cleaner and a strong believer in the power of good data practice:

“There is nothing more satisfying than having a good old spring clean in Salesforce! It’s great to identify trends and get to the root cause of how bad data is able to get in. Popping in a scheduled time every two weeks is the way I’ve found works best. Don’t let it build up and be a task you keep on putting off!”

Beth Clements, ProvenWorks

We live in a ‘need it done yesterday’ society so safeguarding your org against the perils of bad data is a sure way to give you back your time and let you focus on the things that matter.

Remember: the best day to start cleaning your data was yesterday. The second best day is today.

Life’s better after cleaning bad data from your org.

Stay tuned for the final instalment to our data blog series to learn about some of our favourite cleaning apps.

See you back here soon for the final instalment of the series!

Data Cleaning 101: Why should you care about data cleansing?

Unclean data stinks up your system. It can travel through your business, slowing you down, causing confusion and costing you money. Don’t believe us? Between 2019 and 2021, over 40% of sales reps did not have enough information about leads and accounts to sell effectively, and there are plenty more stats around data cleansing coming up.

This is the first in our three-part series taking you through the importance of data cleansing and some top tips for practising clean data habits… so make sure to stick around!

An introduction to data cleansing

Bad data… Does it really matter? How do I clean it? And what can I do about it in the future?

Don’t worry, we have you covered with the ‘whats’, ‘whys’ and ‘hows’ of dirty data throughout this series. First off, a few definitions…

Clean data: Data that is without error and in its entirety so it can be used effectively by everyone.

Data cleansing: The process of identifying incomplete, incorrect, inaccurate or irrelevant data and modifying, validating, deleting or replacing it.

So how does unclean data actually affect me?

Over the past 6 years Salesforce and Salesforce bloggers have noted the persistent and holistic impact that bad data has on companies of all sizes. Let’s take a look at some numbers:

As we’ve already discussed, between 2019 and 2021, over 40% of sales reps did not have enough information about leads and accounts to sell effectively, and therefore struggled to achieve monthly and annual sales targets. This inevitably affects revenue.

Why is data cleansing important?

  1. Ensure consistency, validity and confidence in your data by cleansing it regularly. This foundation can yield better results and help you reach goals quicker.
  2. Reduce the time employees spend sorting through bad data and let users be more productive on the tasks that matter – like growing your business or organization!
  3. Eradicate data privacy worries for customers and clients. It’s outright good practice; you wouldn’t want your private mail sent to an incorrect address!
  4. Protect your company’s reputation. Saving bad or incomplete information affects all areas of your company, from sales and marketing to senior management, stopping everyone from carrying out their job well and making beneficial decisions, which directly affects the reputation of your business and its revenue.

The good news

Not all is lost! The University of Texas has estimated that making your data just 10% more accurate can considerably boost revenue, not only for bigger enterprises but also for B2B and B2C firms.

So now you know what data cleansing is, how do you practice it?

Watch out for the next post in Data Cleaning 101 for our five best tips for cleansing your Salesforce org. You don’t want to miss that one!

Want to learn more about the impact of poor data?

Check out our blog: What is the impact of poor quality data? 

What is the impact of poor quality data?

Data is an incredibly important asset to any business. Bad quality data is not just a time sink to remedy, it can also cause active damage to an organization. Starting with good quality data is key, especially in Salesforce where qualifying leads, assigning key accounts, and ensuring cases are routed to the correct queues are paramount.

What are the consequences of bad data?

  • Damage to relationships with clients and partners
  • Lowered credibility
  • Poor productivity
  • Poor decision making
  • Lost revenue

How can I get accurate data in my Salesforce Organization?

Salesforce provides various ways to keep data accurate such as validation rules, required fields, workflow rules, and duplicate management rules. We believe that good data is the basis of success. This means correct data at the point of entry, and where that’s not possible, it means having the ability to highlight and intelligently cleanse poor data.

We offer a range of solutions for the Salesforce CRM platform that promote and assist accurate data entry, provide methods to cleanse and integrate your data, and help keep your business compliant with GDPR.


AddressTools can verify address data both at the point of entry and as a clean-up task. With data verified against local postal authorities in over 240 countries, you can rely on AddressTools to make Salesforce your master source of address knowledge. Ensure that users in your Salesforce organization spend their time efficiently, rather than fixing the issues caused by poor quality address data.


IndustryComplete uses the NAICS, SIC and ISIC classification systems to verify industries in Salesforce. With an easy-to-use component, users can search for an industry by keyword and populate the standard Salesforce industry field. An interactive, easy-to-use component for entering industry data not only saves users time but ensures accurate data is entered.


The Managed Import component from SimpleImport allows users to import data into Salesforce directly, but safely into predefined fields, mapped by an administrator. Standard users are empowered with streamlined import functionality, but with strict, customizable controls to ensure security and data integrity.


PhoneTools allows you to screen against UK TPS and CTPS lists to assist with PECR compliance, all within Salesforce with no coding required. Manually check numbers when you need them or automate screening to keep you up-to-date.

For more information on the products discussed above, please check out our AppExchange listings for AddressTools, IndustryComplete, SimpleImport and PhoneTools.