
Implementation Guide: Analyze Donor Retention, Lapsed Donor Trends, and Campaign ROI

Step-by-step implementation guide for deploying AI to analyze donor retention, lapsed donor trends, and campaign ROI for nonprofit organization clients.

Hardware Procurement

Staff Workstation Laptop

Dell Latitude 5450 (i5-1345U, 16GB RAM, 256GB SSD), Qty: 3

$850 per unit via TechSoup nonprofit discount / $1,050 suggested resale to client

Primary workstation for fundraising staff accessing cloud-based CRM (Bloomerang), Power BI dashboards, and Dataro analytics platform. 16GB RAM recommended for running multiple browser tabs with BI dashboards alongside CRM and email marketing tools simultaneously.

External Monitor

Dell P2425H 24-inch IPS 1080p, Qty: 3

$190 per unit / $240 suggested resale

Secondary display for side-by-side viewing of CRM donor records and Power BI dashboards during analysis sessions. Particularly useful when staff are comparing campaign performance data across multiple reports.

Wireless Access Point

Ubiquiti UniFi U6 Lite (U6-Lite), Qty: 1

$100 per unit / $150 suggested resale

Business-grade Wi-Fi access point ensuring reliable connectivity for cloud-based SaaS tools. All primary platforms (Bloomerang, Power BI, Dataro) are browser-based and require stable internet connectivity. Replaces consumer-grade routers common in small nonprofit offices.

Managed Network Switch

Ubiquiti UniFi Switch Lite 8 PoE (USW-Lite-8-PoE), Qty: 1

$110 per unit / $160 suggested resale

Powers the UniFi AP via PoE and provides wired Ethernet connections for workstations in the office, ensuring maximum throughput for large data exports and Power BI report rendering.

External Backup Drive

Western Digital My Passport 5TB USB 3.0 (WDBPKJ0050BBK), Qty: 1

$130 per unit / $170 suggested resale

Local backup of exported donor data files before and after migration. Serves as an air-gapped recovery point in case of data migration issues. Encrypted with BitLocker or VeraCrypt for PII protection.

Software Procurement

Bloomerang CRM

Bloomerang

$125–$200/month depending on record count; includes CRM + fundraising modules

Core donor database and retention tracking platform. Provides the engagement scoring engine, donor timeline, retention rate dashboard, and giving history that serve as the foundation for all analytics. Purpose-built for donor retention with an always-visible retention gauge on the dashboard.

Microsoft Power BI Pro

Microsoft, per-seat SaaS with nonprofit discounted pricing

$3/user/month for eligible nonprofits (vs. $10 commercial) via Microsoft nonprofit program

Business intelligence and data visualization platform for building custom donor retention dashboards, lapsed donor trend charts, campaign ROI scorecards, and executive summary reports. Connects to Bloomerang data via API/exported datasets.

Dataro Predict

Dataro, SaaS monthly subscription

$499/month for Core platform; ProspectAI add-on at $250/month

AI/ML prediction engine that analyzes historical donor data to predict churn risk, donor lifetime value, optimal ask amounts, and campaign response likelihood. Integrates directly with Bloomerang to score donors and generate targeted segment lists for re-engagement campaigns.

Microsoft 365 Business Basic for Nonprofits

Microsoft, Business Basic (Nonprofit Donated Tier), Qty: up to 300 users

$0/user/month (donated tier); Business Standard at $3/user/month if needed

Provides Exchange Online email, Teams collaboration, SharePoint document storage, and OneDrive. Used for team communication, storing analytics documentation, and sharing Power BI reports internally via Teams channels.

Mailchimp Standard

Intuit Mailchimp, Standard plan

Free tier for up to 500 contacts; Standard plan from $13.99/month for larger lists

Email marketing platform for executing donor re-engagement campaigns, lapsed donor win-back sequences, and campaign solicitations. Campaign performance data (open rates, click rates, conversion rates) feeds back into the ROI analysis layer in Power BI.

QuickBooks Online

Intuit, Simple Start (recommended for small nonprofits)

$17.50–$37.50/month (nonprofit pricing varies)

Accounting system that integrates with Bloomerang for financial reconciliation. Ensures donation revenue matches between CRM and general ledger, enabling accurate campaign ROI calculations that account for true costs (printing, postage, platform fees, staff time).

Zapier Professional

Zapier, Professional plan (750 tasks/month)

$19.99/month

Integration platform connecting Bloomerang, Mailchimp, QuickBooks, Power BI, and Dataro where native integrations are unavailable. Automates data flows such as: new donation in Bloomerang → update QuickBooks → trigger Power BI dataset refresh → log in Google Sheets backup.

DonorSearch Enhanced CORE

DonorSearch, Qty: 2,000–5,000 contact screens

$3,000–$8,000/year (Phase 3 add-on)

Prospect research and wealth screening database that enriches donor records with giving capacity, philanthropic affinity scores, and public giving history. Enables the AI layer to make more accurate predictions about major gift readiness and upgrade potential. Deferred to Phase 3.

Microsoft Azure Nonprofit Credits

Microsoft, annual grant of usage-based cloud credits

$2,000/year in donated Azure credits for eligible nonprofits

Provides cloud compute and storage for hosting a small Azure SQL Database or Azure Data Lake if the client outgrows direct Power BI imports and needs a centralized data warehouse for advanced analytics queries across multiple data sources.

Prerequisites

  • Client must be a registered 501(c)(3) nonprofit organization (required for Microsoft nonprofit pricing, TechSoup hardware discounts, and other nonprofit technology grants)
  • Register the organization on TechSoup (https://www.techsoup.org/join) to access nonprofit hardware and software discounts — registration and validation typically takes 2–4 weeks, so initiate this immediately
  • Existing donor data must be available in an exportable format: CSV, Excel spreadsheet, or export from a legacy CRM (e.g., Raiser's Edge, eTapestry, Network for Good, or even a FileMaker database). Minimum data required: donor name, contact info, gift date, gift amount, campaign/fund designation, and payment method
  • Minimum 25 Mbps download / 5 Mbps upload internet connection at the nonprofit office — all platforms are cloud-based SaaS and require reliable connectivity
  • Client must designate a primary point of contact (ideally the Development Director or Database Manager) who has authority over donor data and can approve data migration, field mappings, and campaign taxonomy decisions
  • A complete inventory of current fundraising campaigns, funds, and appeals with standardized naming conventions — or willingness to establish these during Phase 1
  • Email marketing platform credentials (Mailchimp, Constant Contact, or equivalent) with admin access for integration configuration
  • QuickBooks Online (or equivalent accounting system) credentials with admin access for integration with Bloomerang
  • Web browser requirements: Google Chrome 120+, Microsoft Edge 120+, or Mozilla Firefox 120+ on all workstations — Power BI and Bloomerang require modern browser engines
  • Written authorization from the nonprofit's Executive Director or Board authorizing the MSP to handle donor PII (Personally Identifiable Information) during data migration and system configuration — essential for compliance documentation
  • Existing domain name and DNS access for configuring email authentication (SPF, DKIM, DMARC) which is required for Mailchimp deliverability and anti-phishing compliance
  • Document any existing data sharing agreements, privacy policies, or donor communication preferences that must be honored during migration

Installation Steps

...

Step 1: Nonprofit Technology Registration and Account Provisioning

Register the client organization across all nonprofit technology programs to unlock discounted and donated licensing. This must happen first as some validations take days or weeks. Register on TechSoup (if not already registered), apply for Microsoft Nonprofit Program, and create vendor accounts for Bloomerang, Dataro, and DonorSearch.

  • TechSoup Registration: Navigate to https://www.techsoup.org/join
  • Complete organization profile with EIN, NTEE code, and mission statement
  • Upload 501(c)(3) determination letter
  • Microsoft Nonprofit Program: Navigate to https://nonprofit.microsoft.com/getting-started
  • Register with organization EIN and TechSoup validation token
  • Once approved, access Microsoft 365 Admin Center at https://admin.microsoft.com
  • Bloomerang Trial/Account: Navigate to https://bloomerang.co/pricing/
  • Request demo and pricing for client's record count
  • Note: Bloomerang offers free data migration assistance for new customers
Note

TechSoup validation can take 2–14 business days. Start this process on Day 1 of the engagement. Microsoft nonprofit validation requires a separate process and typically takes 3–10 business days after TechSoup validation is confirmed. Do NOT wait for all validations before proceeding with data audit (Step 2).

Step 2: Donor Data Audit and Quality Assessment

Export all donor data from the client's current system and perform a comprehensive data quality audit. This is the single most critical step — the quality of donor data directly determines the accuracy and usefulness of all downstream analytics and AI predictions. Identify duplicates, missing fields, inconsistent naming conventions, and data integrity issues.

Automated donor data quality assessment using Python pandas — run on MSP workstation
bash
# Export donor data from legacy system to CSV
# If using Excel, save as UTF-8 CSV to preserve special characters

# Use Python pandas for automated data quality assessment (run on MSP workstation)
pip install pandas openpyxl

python3 << 'EOF'
import pandas as pd

# Load donor data
df = pd.read_csv('donor_export.csv')

# Basic quality metrics
print('=== DONOR DATA QUALITY REPORT ===')
print(f'Total records: {len(df)}')
print(f'\nColumn completeness:')
for col in df.columns:
    completeness = (df[col].notna().sum() / len(df)) * 100
    print(f'  {col}: {completeness:.1f}% complete')

# Check for duplicate emails
if 'Email' in df.columns:
    dupes = df.duplicated(subset=['Email'], keep=False)
    print(f'\nPotential duplicate emails: {dupes.sum()} records')

# Check for duplicates by name
if 'First Name' in df.columns and 'Last Name' in df.columns:
    name_dupes = df.duplicated(subset=['First Name', 'Last Name'], keep=False)
    print(f'Potential duplicate names: {name_dupes.sum()} records')

# Giving history analysis
if 'Gift Amount' in df.columns:
    print(f'\n=== GIVING SUMMARY ===')
    print(f'Total gifts: {len(df)}')
    print(f'Average gift: ${df["Gift Amount"].mean():.2f}')
    print(f'Median gift: ${df["Gift Amount"].median():.2f}')
    print(f'Gifts with $0 or negative: {(df["Gift Amount"] <= 0).sum()}')

if 'Gift Date' in df.columns:
    df['Gift Date'] = pd.to_datetime(df['Gift Date'], errors='coerce')
    invalid_dates = df['Gift Date'].isna().sum()
    print(f'Records with invalid/missing dates: {invalid_dates}')
    if df['Gift Date'].notna().any():
        print(f'Date range: {df["Gift Date"].min()} to {df["Gift Date"].max()}')

# Campaign/Fund analysis
if 'Campaign' in df.columns:
    print(f'\n=== CAMPAIGN CODES ===')
    print(f'Unique campaigns: {df["Campaign"].nunique()}')
    print(f'Missing campaign: {df["Campaign"].isna().sum()}')
    print(f'\nTop 10 campaigns:')
    print(df['Campaign'].value_counts().head(10))

print('\n=== ACTION ITEMS ===')
print('Review this report with the client Development Director.')
print('Prioritize: deduplication, date standardization, campaign code cleanup.')
EOF
Note

This audit typically reveals 5–20% duplicate records in nonprofits that have been using spreadsheets. Document ALL findings in a written report for the client. The client's Development Director must review and approve data cleanup decisions — never delete or merge donor records without explicit written approval. Save the original unmodified export as a backup before any cleanup.
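Since the cleanup decisions rest with the Development Director, it helps to hand them suspected duplicates grouped side by side rather than a raw count. A minimal sketch, assuming the same `First Name`/`Last Name`/`Email` column headers used in the audit script:

```python
import pandas as pd

def build_merge_review(df: pd.DataFrame) -> pd.DataFrame:
    """Group rows sharing a normalized first+last name key so merge
    candidates appear together for manual review."""
    key = (df['First Name'].fillna('').str.strip().str.lower()
           + '|' + df['Last Name'].fillna('').str.strip().str.lower())
    out = df.assign(MergeKey=key)
    # keep=False marks every row in a duplicate group, not just repeats
    dupes = out[out.duplicated('MergeKey', keep=False) & (out['MergeKey'] != '|')]
    return dupes.sort_values('MergeKey')

# Toy example; in practice load donor_export.csv
sample = pd.DataFrame({
    'First Name': ['Jane', ' jane ', 'Bob'],
    'Last Name': ['Doe', 'DOE', 'Smith'],
    'Email': ['jane@a.org', 'jdoe@b.org', 'bob@c.org'],
})
review = build_merge_review(sample)
print(f'{len(review)} records in {review["MergeKey"].nunique()} candidate groups')
```

Export the result to Excel for the sign-off meeting; each group either gets merged or documented as legitimately separate entities.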

Step 3: Data Cleanup and Standardization

Based on the audit findings, systematically clean and standardize the donor data before importing into Bloomerang. This includes deduplication, standardizing campaign/fund codes, normalizing addresses, and ensuring gift records have complete required fields. Create a standardized campaign taxonomy document that the client approves.

Data cleanup script and campaign taxonomy document generator — customize based on audit findings
bash
# Data cleanup script — customize based on audit findings
python3 << 'EOF'
import pandas as pd
import re

df = pd.read_csv('donor_export.csv')

# 1. Standardize name fields
df['First Name'] = df['First Name'].str.strip().str.title()
df['Last Name'] = df['Last Name'].str.strip().str.title()

# 2. Standardize email (lowercase, strip whitespace)
if 'Email' in df.columns:
    df['Email'] = df['Email'].str.strip().str.lower()

# 3. Remove exact duplicate rows
original_count = len(df)
df = df.drop_duplicates()
print(f'Removed {original_count - len(df)} exact duplicates')

# 4. Standardize campaign codes (example mapping — customize per client)
campaign_mapping = {
    'annual appeal': 'Annual Appeal 2024',
    'annual fund': 'Annual Appeal 2024',
    'ann appeal': 'Annual Appeal 2024',
    'gala': 'Annual Gala 2024',
    'gala 2024': 'Annual Gala 2024',
    'gala dinner': 'Annual Gala 2024',
    'major gifts': 'Major Gifts',
    'online': 'Online General',
    'website': 'Online General',
    # Add client-specific mappings here
}
if 'Campaign' in df.columns:
    df['Campaign_Clean'] = df['Campaign'].str.strip().str.lower().map(campaign_mapping)
    df['Campaign'] = df['Campaign_Clean'].fillna(df['Campaign'])
    df = df.drop(columns=['Campaign_Clean'])

# 5. Standardize gift dates to ISO format
if 'Gift Date' in df.columns:
    df['Gift Date'] = pd.to_datetime(df['Gift Date'], errors='coerce').dt.strftime('%Y-%m-%d')

# 6. Flag records needing manual review
df['Needs_Review'] = False
if 'Email' in df.columns:
    df.loc[df['Email'].isna() | (df['Email'] == ''), 'Needs_Review'] = True
if 'Gift Amount' in df.columns:
    df.loc[df['Gift Amount'].isna() | (df['Gift Amount'] <= 0), 'Needs_Review'] = True

review_count = df['Needs_Review'].sum()
print(f'Records flagged for manual review: {review_count}')

# Save cleaned data
df.to_csv('donor_data_cleaned.csv', index=False)
df[df['Needs_Review']].to_csv('donor_data_needs_review.csv', index=False)
print('Cleaned data saved. Review flagged records with client before import.')
EOF

# Create campaign taxonomy document
cat << 'EOF' > campaign_taxonomy.md
# Campaign & Fund Taxonomy
# [Client Name] — Approved [Date]

## Campaign Codes
| Code | Description | Type | Active |
|------|-------------|------|--------|
| ANNUAL-2024 | Annual Appeal 2024 | Direct Mail + Email | Yes |
| GALA-2024 | Annual Gala 2024 | Event | Yes |
| ONLINE-GEN | Online General Giving | Digital | Yes |
| MAJOR-GIFTS | Major Gifts Program | Personal Solicitation | Yes |
| PLANNED-GIVING | Planned Giving / Bequests | Planned | Yes |

## Fund Codes
| Code | Description | Restriction |
|------|-------------|-------------|
| UNRESTRICTED | General Operating | Unrestricted |
| PROGRAM-A | [Program Name] | Restricted |
| ENDOWMENT | Endowment Fund | Permanently Restricted |

*Approved by: [Development Director Name] on [Date]*
EOF
Critical

CRITICAL: Get written sign-off from the Development Director on all merge/delete decisions for duplicate records. Some apparent duplicates may actually be separate entities (e.g., a donor who gives personally AND through their family foundation). Never merge without verification. The campaign taxonomy document becomes a living reference — store it in the client's SharePoint.

Step 4: Bloomerang CRM Provisioning and Configuration

Set up the Bloomerang CRM account, configure organizational settings, customize fields and dropdowns to match the approved campaign taxonomy, and prepare the system for data import. Configure user accounts with appropriate role-based access controls.

1
ORGANIZATION SETTINGS — Navigate to Settings > Organization. Enter nonprofit legal name, EIN, address. Upload organization logo (used in reports and acknowledgment letters). Set fiscal year start date (most nonprofits use Jan 1 or Jul 1). Set default currency to USD.
2
CUSTOM FIELDS — Navigate to Settings > Custom Fields. Add custom fields as needed: Donor Segment (dropdown): Major Donor, Mid-Level, Annual, First-Time, Lapsed; Acquisition Source (dropdown): Event, Online, Direct Mail, Peer Referral, Social Media; Lapse Risk Score (number): Will be populated by Dataro in Phase 3; Board Member (checkbox); Volunteer (checkbox).
3
CAMPAIGN & FUND SETUP — Navigate to Transactions > Campaigns. Create each campaign from the approved taxonomy document. Navigate to Transactions > Funds. Create each fund from the approved taxonomy document.
4
USER ACCOUNTS — Navigate to Settings > Users. Create accounts for each staff member. Assign roles: Admin: Development Director (full access); Standard: Fundraising staff (read/write donors and transactions); Read-Only: Executive Director, Board members (dashboard view only). Enable two-factor authentication for ALL users.
5
COMMUNICATION SETTINGS — Navigate to Settings > Communications. Configure default acknowledgment letter template. Set up email integration (connect organizational email).
Note

Bloomerang provides complimentary onboarding support for new customers — schedule this concurrently with MSP configuration. Bloomerang's onboarding team can handle complex field mapping during data import. Ensure two-factor authentication is enabled for all users before any donor data is imported — this is a compliance requirement for handling PII.
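The Donor Segment dropdown created in Step 4 implies a set of assignment rules. A minimal sketch of that rule logic, assuming illustrative thresholds: the $1,000 major-donor cutoff mirrors the Mailchimp segment in Step 9, while the 12-month lapse and 90-day first-time windows are assumptions to confirm with the Development Director.

```python
from datetime import date

def classify_donor(total_giving: float, first_gift: date,
                   last_gift: date, today: date) -> str:
    """Rule-based value for the 'Donor Segment' custom field.
    Thresholds are illustrative, not Bloomerang defaults."""
    months_since_last = ((today.year - last_gift.year) * 12
                         + (today.month - last_gift.month))
    if months_since_last > 12:          # no gift in the last 12 months
        return 'Lapsed'
    if (today - first_gift).days <= 90:  # first gift within 90 days
        return 'First-Time'
    if total_giving >= 1000:
        return 'Major Donor'
    if total_giving >= 250:
        return 'Mid-Level'
    return 'Annual'

print(classify_donor(1500, date(2020, 3, 1), date(2024, 5, 1), date(2024, 9, 1)))
print(classify_donor(50, date(2022, 1, 1), date(2022, 6, 1), date(2024, 9, 1)))
```

Running the rules once over the cleaned export pre-populates the dropdown at import time instead of leaving staff to classify donors by hand.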

Step 5: Donor Data Import into Bloomerang

Import the cleaned and standardized donor data into Bloomerang CRM. Bloomerang provides a guided import wizard and free data migration assistance for new customers. Perform the import in a structured sequence: constituent records first, then giving history, then communication preferences.

  • PREPARE IMPORT FILES — Split cleaned data into separate files if needed: constituents.csv (name, address, email, phone, custom fields), transactions.csv (constituent ID, gift date, amount, campaign, fund, payment method), relationships.csv (household links, organizational affiliations)
  • USE BLOOMERANG IMPORT WIZARD — Navigate to Database > Import, select file type (CSV), upload constituents.csv first, map each column to the corresponding Bloomerang field, preview the import and verify 10–20 records manually, execute import, then repeat for transactions.csv
  • VERIFY IMPORT — Run Bloomerang's built-in duplicate check (Database > Find Duplicates), spot-check 20 random donor records against the original data, verify total gift counts and amounts match source data
Quick verification query
bash
# export source data totals and compare against Bloomerang Dashboard after
# import

python3 << 'EOF'
import pandas as pd

# Compare source totals to imported totals
source = pd.read_csv('donor_data_cleaned.csv')

print('=== SOURCE DATA VERIFICATION TOTALS ===')
print(f'Total constituent records: {source["Email"].nunique() if "Email" in source.columns else len(source)}')
if 'Gift Amount' in source.columns:
    print(f'Total gift amount: ${source["Gift Amount"].sum():,.2f}')
    print(f'Total gift count: {len(source)}')
if 'Gift Date' in source.columns:
    print(f'Date range: {source["Gift Date"].min()} to {source["Gift Date"].max()}')
print('\nCompare these totals against Bloomerang Dashboard after import.')
EOF
Note

Bloomerang offers free data migration assistance — contact their support team at support@bloomerang.co to schedule a migration call. They can handle complex imports from legacy systems including Raiser's Edge, DonorPerfect, eTapestry, and spreadsheets. For large datasets (>10,000 records), the import may take 30–60 minutes. Do NOT navigate away from the import page during processing. Always run a test import with a small subset (50–100 records) first.
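The test-import recommendation above is easy to operationalize: sample a reproducible subset of the cleaned file so the exact same records can be re-verified in Bloomerang after the trial run. A minimal sketch:

```python
import pandas as pd

def make_test_subset(df: pd.DataFrame, n: int = 100, seed: int = 42) -> pd.DataFrame:
    """Reproducible sample of up to n rows for a trial import run."""
    return df.sample(n=min(n, len(df)), random_state=seed)

# Toy frame for illustration; in practice read donor_data_cleaned.csv
# and write the sample to donor_data_test_import.csv for the wizard.
toy = pd.DataFrame({'Email': [f'd{i}@example.org' for i in range(250)]})
subset = make_test_subset(toy)
print(f'{len(subset)} records selected for test import')
```

The fixed `random_state` means the subset can be regenerated later when spot-checking the imported records against the source.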

Step 6: Microsoft 365 and Power BI Pro Deployment

Deploy Microsoft 365 Business Basic (free nonprofit tier) and Power BI Pro licenses for the client organization. Configure the Microsoft 365 tenant, assign licenses, and set up the Power BI workspace that will host donor analytics dashboards.

1
MICROSOFT 365 TENANT SETUP: Log into Microsoft 365 Admin Center: https://admin.microsoft.com. Verify nonprofit licensing is applied (should show 'Microsoft 365 Business Basic for Nonprofits'). Assign licenses to all users.
2
POWER BI PRO LICENSE ASSIGNMENT: In Admin Center > Billing > Purchase Services. Search for 'Power BI Pro for Nonprofits' ($3/user/month). Purchase licenses for: Development Director, Database Manager, Executive Director. Assign licenses under Users > Active Users > [User] > Licenses.
3
POWER BI WORKSPACE CREATION: Navigate to https://app.powerbi.com. Sign in with admin credentials. Click Workspaces > Create a workspace. Name: '[Organization Name] Donor Analytics'. Description: 'Donor retention, lapsed donor trends, and campaign ROI dashboards'. Set workspace access: Admin: MSP service account, Development Director. Member: Database Manager. Viewer: Executive Director, other staff.
4
CONFIGURE DATA GATEWAY (if connecting to on-premise data): For cloud-only setups (recommended), no gateway needed. Power BI will connect to Bloomerang via REST API or scheduled CSV imports.
5
INSTALL POWER BI DESKTOP (on MSP and client workstations): Download from: https://powerbi.microsoft.com/desktop/
Install Power BI Desktop via winget on Windows
shell
winget install Microsoft.PowerBIDesktop
Note

Power BI Pro at $3/user/month for nonprofits is one of the best values in the analytics space. Ensure the nonprofit admin approves the Microsoft 365 tenant before purchasing Power BI licenses — they must be within the same tenant. The MSP should create a dedicated service account (e.g., powerbi-admin@clientdomain.org) for managing scheduled data refreshes and dashboard administration. This account should have MFA enabled.

Step 7: Power BI Data Connection and Donor Retention Dashboard Build

Connect Power BI to Bloomerang donor data and build the three core dashboards: (1) Donor Retention Rate tracker, (2) Lapsed Donor Trends analysis, and (3) Campaign ROI scorecard. Use Bloomerang's data export or API to feed Power BI, then build interactive visuals with drill-down capability.

  • OPTION A: Scheduled CSV Export (simpler, recommended for Phase 2)
  • In Bloomerang, navigate to Reports > Custom Reports
  • Create a report with: Constituent Name, Email, First Gift Date, Last Gift Date, Total Giving, Gift Count, Largest Gift, Last Gift Amount, Engagement Level
  • Export as CSV
  • Place in a OneDrive/SharePoint folder for auto-refresh in Power BI
  • OPTION B: Bloomerang API (advanced, for automated refresh)
  • Bloomerang REST API documentation: https://bloomerang.co/features/integrations/api/
  • API calls require an API key from Bloomerang Settings > Integrations > API
Sample Python script to extract Bloomerang data for Power BI
bash
python3 << 'PYEOF'
import requests
import pandas as pd
from datetime import datetime, timedelta

# Bloomerang API configuration
API_KEY = 'YOUR_BLOOMERANG_API_KEY'  # Get from Bloomerang Settings > API
BASE_URL = 'https://crm.bloomerang.co/api/v2'
HEADERS = {
    'X-API-KEY': API_KEY,
    'Content-Type': 'application/json'
}

def get_all_constituents():
    constituents = []
    skip = 0
    take = 50  # Bloomerang API max per page
    while True:
        resp = requests.get(f'{BASE_URL}/constituents?skip={skip}&take={take}', headers=HEADERS)
        data = resp.json()
        if not data.get('Results'):
            break
        constituents.extend(data['Results'])
        skip += take
        if skip >= data.get('Total', 0):
            break
    return constituents

def get_transactions(start_date, end_date):
    transactions = []
    skip = 0
    take = 50
    while True:
        resp = requests.get(
            f'{BASE_URL}/transactions?skip={skip}&take={take}&minDate={start_date}&maxDate={end_date}',
            headers=HEADERS
        )
        data = resp.json()
        if not data.get('Results'):
            break
        transactions.extend(data['Results'])
        skip += take
        if skip >= data.get('Total', 0):
            break
    return transactions

# Extract data
print('Fetching constituents...')
constituents = get_all_constituents()
print(f'Retrieved {len(constituents)} constituents')

# Get last 5 years of transactions
end_date = datetime.now().strftime('%Y-%m-%d')
start_date = (datetime.now() - timedelta(days=5*365)).strftime('%Y-%m-%d')
print(f'Fetching transactions from {start_date} to {end_date}...')
transactions = get_transactions(start_date, end_date)
print(f'Retrieved {len(transactions)} transactions')

# Save as CSV for Power BI import
const_df = pd.json_normalize(constituents)
const_df.to_csv('bloomerang_constituents.csv', index=False)

trans_df = pd.json_normalize(transactions)
trans_df.to_csv('bloomerang_transactions.csv', index=False)
print('Data exported for Power BI import.')
PYEOF
Note

For the initial build, the CSV export method is faster and more reliable. Transition to API-based automated refresh in Phase 3 when the client is comfortable with the dashboards. Bloomerang API rate limits apply — the API uses pagination with a max of 50 records per call. For large databases, the extraction script may take 10–30 minutes. Schedule API extractions during off-hours. Store the Power BI Desktop (.pbix) files in the client's SharePoint for version control.

Step 8: Build Core Power BI Dashboards

Using Power BI Desktop, build three interactive dashboards that deliver the core analytics: donor retention rates, lapsed donor identification, and campaign ROI measurement. These dashboards will be published to the Power BI Service workspace for browser-based access by all authorized staff.

Data Model

  • Tables to import: Constituents (from Bloomerang export), Transactions (from Bloomerang export), Campaigns (manual entry or from Bloomerang), Calendar (auto-generated date table)
  • Relationship: Transactions[ConstituentId] → Constituents[Id] (many-to-one)
  • Relationship: Transactions[CampaignId] → Campaigns[Id] (many-to-one)
  • Relationship: Transactions[GiftDate] → Calendar[Date] (many-to-one)
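Power BI can auto-generate the Calendar table (`CALENDARAUTO()` in DAX), but exporting one from pandas keeps the date dimension identical across tools. A sketch assuming a July 1 fiscal year start; adjust the `FiscalYear` rule to the client's fiscal calendar from Step 4.

```python
import pandas as pd

def build_calendar(start: str, end: str) -> pd.DataFrame:
    """Date dimension covering the transaction history."""
    dates = pd.date_range(start, end, freq='D')
    return pd.DataFrame({
        'Date': dates,
        'Year': dates.year,
        'Month': dates.month,
        'MonthName': dates.strftime('%B'),
        'Quarter': dates.quarter,
        # Assumes a July 1 fiscal year start: FY rolls over in month 7
        'FiscalYear': dates.year + (dates.month >= 7),
    })

cal = build_calendar('2019-01-01', '2025-12-31')
cal.to_csv('calendar_dimension.csv', index=False)
print(len(cal), 'calendar rows')
```

Mark the table as the official date table in Power BI (Modeling > Mark as date table) so time-intelligence functions use it.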

DAX Measures

Create these measures in Power BI Desktop > Modeling > New Measure.

DAX Measure 1: Donor Retention Rate
dax
-- 1. DONOR RETENTION RATE
-- Retention Rate = (Donors who gave in both current and prior year) / (Donors who gave in prior year)
RetentionRate = 
  VAR CurrentYear = YEAR(TODAY())
  VAR PriorYearDonors = 
    CALCULATETABLE(
      VALUES(Transactions[ConstituentId]),
      FILTER(ALL(Transactions), YEAR(Transactions[GiftDate]) = CurrentYear - 1)
    )
  VAR CurrentYearDonors = 
    CALCULATETABLE(
      VALUES(Transactions[ConstituentId]),
      FILTER(ALL(Transactions), YEAR(Transactions[GiftDate]) = CurrentYear)
    )
  VAR RetainedDonors = INTERSECT(CurrentYearDonors, PriorYearDonors)
  RETURN
    DIVIDE(COUNTROWS(RetainedDonors), COUNTROWS(PriorYearDonors), 0)
DAX Measure 2: Lapsed Donor Count
dax
-- 2. LAPSED DONOR COUNT
-- Donors who gave in prior year but NOT in current year
LapsedDonorCount = 
  VAR CurrentYear = YEAR(TODAY())
  VAR PriorYearDonors = 
    CALCULATETABLE(
      VALUES(Transactions[ConstituentId]),
      FILTER(ALL(Transactions), YEAR(Transactions[GiftDate]) = CurrentYear - 1)
    )
  VAR CurrentYearDonors = 
    CALCULATETABLE(
      VALUES(Transactions[ConstituentId]),
      FILTER(ALL(Transactions), YEAR(Transactions[GiftDate]) = CurrentYear)
    )
  RETURN
    COUNTROWS(EXCEPT(PriorYearDonors, CurrentYearDonors))
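Before publishing, it is worth cross-checking both measures against the raw transaction export. A pandas sketch of the same year-over-year logic, assuming the `ConstituentId`/`GiftDate` column names from the Step 5 import files:

```python
import pandas as pd

def retention_stats(trans: pd.DataFrame, year: int):
    """Return (retention_rate, lapsed_count) for `year` vs `year - 1`,
    mirroring the RetentionRate and LapsedDonorCount DAX measures."""
    yr = trans['GiftDate'].dt.year
    prior = set(trans.loc[yr == year - 1, 'ConstituentId'])
    current = set(trans.loc[yr == year, 'ConstituentId'])
    retained = prior & current
    rate = len(retained) / len(prior) if prior else 0.0
    return rate, len(prior - current)

# Toy data: donors 1 and 3 give in both years, donor 2 lapses
trans = pd.DataFrame({
    'ConstituentId': [1, 2, 3, 1, 3],
    'GiftDate': pd.to_datetime(['2023-03-01', '2023-05-10', '2023-11-02',
                                '2024-02-15', '2024-06-30']),
})
rate, lapsed = retention_stats(trans, 2024)
print(f'Retention: {rate:.0%}, lapsed donors: {lapsed}')
```

If the pandas numbers and the dashboard KPIs diverge, check the relationship directions in the data model before trusting either.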
DAX Measures 3–5: Campaign ROI, Cost Per Dollar Raised, Donor Lifetime Value
dax
-- 3. CAMPAIGN ROI
-- CampaignROI = (Total Revenue - Total Cost) / Total Cost
-- Note: Campaign costs must be manually entered or imported from a cost tracking sheet
CampaignROI = 
  DIVIDE(
    [TotalRevenue] - [TotalCost],
    [TotalCost],
    0
  )

-- 4. COST PER DOLLAR RAISED
CostPerDollarRaised = DIVIDE([TotalCost], [TotalRevenue], 0)

-- 5. DONOR LIFETIME VALUE
AvgDonorLifetimeValue = 
  AVERAGEX(
    VALUES(Transactions[ConstituentId]),
    CALCULATE(SUM(Transactions[Amount]))
  )
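Because Bloomerang tracks revenue but not expenses, the ROI measure depends on joining a separately maintained cost sheet. A pandas sketch of that join; the campaign codes and dollar figures are illustrative.

```python
import pandas as pd

# Revenue rows as they would come from the Bloomerang transaction export
revenue = pd.DataFrame({
    'Campaign': ['ANNUAL-2024', 'ANNUAL-2024', 'GALA-2024'],
    'Amount': [5000.0, 2500.0, 12000.0],
})
# Cost tracker maintained by the Development Director (SharePoint Excel)
costs = pd.DataFrame({
    'Campaign': ['ANNUAL-2024', 'GALA-2024'],
    'Cost': [1500.0, 8000.0],
})

# Aggregate revenue per campaign, then left-join costs so campaigns
# missing a cost row surface as NaN for follow-up
roi = (revenue.groupby('Campaign', as_index=False)['Amount'].sum()
       .merge(costs, on='Campaign', how='left'))
roi['ROI'] = (roi['Amount'] - roi['Cost']) / roi['Cost']
roi['CostPerDollar'] = roi['Cost'] / roi['Amount']
print(roi)
```

The left join is deliberate: a campaign with revenue but no cost row is a process gap to raise with the client, not a free campaign.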

Dashboard Pages

  • Page 1 — Donor Retention Overview: KPI Card: Current Year Retention Rate (with target indicator); Line Chart: Retention Rate by Year (5-year trend); Bar Chart: Retention Rate by Donor Segment; Table: Top 25 Lapsed Donors by Prior Year Giving
  • Page 2 — Lapsed Donor Analysis: KPI Card: Total Lapsed Donors This Year; Funnel: Donor Lifecycle (New > Active > At-Risk > Lapsed); Heat Map: Lapse Timing (months since last gift); Scatter Plot: Last Gift Amount vs. Total Lifetime Giving for Lapsed Donors; Slicer: Filter by Campaign, Giving Level, Acquisition Source
  • Page 3 — Campaign ROI Scorecard: Matrix: Campaign Name | Revenue | Cost | ROI | Cost per Dollar Raised; Column Chart: Revenue by Campaign (sorted by ROI); Line Chart: Campaign Revenue Trend by Month; KPI Cards: Total Revenue, Avg Gift Size, Donor Count, New Donor Count

Publish to Power BI Service

1
In Power BI Desktop, go to File > Publish > Publish to Power BI
2
Select workspace: '[Organization Name] Donor Analytics'
3
Configure scheduled refresh: Daily at 6:00 AM (after Bloomerang nightly export)
Note

The DAX measures above are templates — adjust the fiscal year logic if the client uses a non-calendar fiscal year (e.g., July 1 start). Campaign costs must be tracked separately (typically in a SharePoint Excel file or QuickBooks) since Bloomerang tracks revenue but not campaign expenses. The campaign cost data is essential for ROI calculation — work with the Development Director to establish a cost tracking process. Include the Fundraising Effectiveness Project (FEP) industry benchmark retention rate (currently ~45% overall, ~19% for first-time donors) as reference lines on the retention chart.

Step 9: CRM Integrations — QuickBooks, Mailchimp, and Zapier Workflows

Connect Bloomerang to QuickBooks Online for financial reconciliation, Mailchimp for email campaign tracking, and configure Zapier automations to synchronize data across all platforms. These integrations ensure campaign cost data flows into ROI calculations and email engagement data enriches donor profiles.

1
BLOOMERANG <> QUICKBOOKS INTEGRATION: In Bloomerang: Settings > Integrations > QuickBooks Online → Click 'Connect to QuickBooks' → Authorize Bloomerang to access QuickBooks (OAuth flow) → Map Bloomerang funds to QuickBooks accounts: Unrestricted Fund → QB Revenue: Contributions - Unrestricted; Program Fund → QB Revenue: Contributions - Restricted → Enable automatic sync of new transactions → Test: Enter a test donation in Bloomerang, verify it appears in QuickBooks
2
BLOOMERANG <> MAILCHIMP INTEGRATION: In Bloomerang: Settings > Integrations > Mailchimp → Click 'Connect to Mailchimp' → Authorize with Mailchimp API key → Map Bloomerang constituent lists to Mailchimp audiences → Configure sync direction: Bloomerang → Mailchimp (donor segments push to email lists) → Key segments to sync: Active Donors (gift in last 12 months), Lapsed Donors (no gift in 13-24 months), First-Time Donors (first gift in last 90 days), Major Donors (cumulative giving > $1,000)
3
ZAPIER AUTOMATION WORKFLOWS: Create the following Zaps at https://zapier.com

  • Zap 1: New Bloomerang Transaction → Log to Google Sheets (backup) | Trigger: Bloomerang - New Transaction | Action: Google Sheets - Create Spreadsheet Row | Fields: Date, Amount, Donor Name, Campaign, Fund
  • Zap 2: Mailchimp Campaign Sent → Update Campaign Cost Sheet | Trigger: Mailchimp - New Campaign | Action: Google Sheets - Create Row in Campaign Cost Tracker | Fields: Campaign Name, Send Date, List Size (for cost tracking)
  • Zap 3: Lapsed Donor Alert → Microsoft Teams Notification | Trigger: Schedule (weekly) + Filter (donors with last gift > 12 months ago) | Action: Microsoft Teams - Post Message to Channel | Message: 'Weekly Lapsed Donor Alert: {count} donors at risk of lapsing'
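Zap 3's filter logic can be prototyped outside Zapier to confirm the threshold before the automation goes live. A minimal sketch; the `lapsed_alert` helper and the 365-day cutoff are illustrative, and the Teams posting itself is handled natively by Zapier.

```python
from datetime import date

def lapsed_alert(last_gifts: dict[str, date], today: date) -> str:
    """Count donors whose most recent gift is over 12 months old and
    format the Teams channel message used by Zap 3."""
    cutoff_days = 365
    at_risk = [name for name, last in last_gifts.items()
               if (today - last).days > cutoff_days]
    return f'Weekly Lapsed Donor Alert: {len(at_risk)} donors at risk of lapsing'

msg = lapsed_alert(
    {'Jane Doe': date(2023, 1, 10),   # last gift ~20 months ago -> at risk
     'Bob Smith': date(2024, 8, 1)},  # gave last month -> fine
    today=date(2024, 9, 1),
)
print(msg)
```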

Email Authentication (for Mailchimp Deliverability)

Add these DNS records to the client's domain:

DNS records required for Mailchimp email authentication (SPF, DKIM, DMARC)
dns
# SPF Record (TXT):
v=spf1 include:servers.mcsv.net include:spf.protection.outlook.com ~all

# DKIM Record (CNAME) - get exact values from Mailchimp:
k1._domainkey.clientdomain.org → dkim.mcsv.net

# DMARC Record (TXT):
_dmarc.clientdomain.org → v=DMARC1; p=quarantine; rua=mailto:admin@clientdomain.org
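Before publishing, the TXT values can be sanity-checked with a small script. These are string-shape checks only; they do not query DNS or validate deliverability:

```python
def check_spf(record):
    """Basic shape check for an SPF TXT record."""
    parts = record.split()
    return parts[0] == "v=spf1" and parts[-1] in ("~all", "-all", "?all")

def check_dmarc(record):
    """Basic shape check for a DMARC TXT record."""
    tags = dict(
        t.strip().split("=", 1) for t in record.split(";") if "=" in t
    )
    return tags.get("v") == "DMARC1" and tags.get("p") in ("none", "quarantine", "reject")

spf = "v=spf1 include:servers.mcsv.net include:spf.protection.outlook.com ~all"
dmarc = "v=DMARC1; p=quarantine; rua=mailto:admin@clientdomain.org"
```

After the records propagate, confirm end-to-end authentication with mail-tester.com as described in the note below.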
Note

The QuickBooks integration is essential for accurate campaign ROI — without it, campaign costs must be manually entered. Mailchimp segment sync enables targeted re-engagement campaigns to lapsed donors identified by the Power BI dashboards. The DNS records for email authentication are critical — without SPF/DKIM/DMARC, re-engagement emails may land in spam, undermining the entire lapsed donor recovery strategy. Test email deliverability using mail-tester.com after DNS changes propagate (24-48 hours).

Step 10: Dataro AI Predictive Analytics Integration (Phase 3)

Deploy Dataro's AI prediction platform to add machine learning-powered donor scoring on top of the Bloomerang CRM data. Dataro analyzes historical giving patterns to predict which donors are at risk of lapsing, which are likely to upgrade, and which campaigns will generate the highest ROI. This transforms the dashboards from backward-looking reports to forward-looking intelligence.

1
DATARO ACCOUNT SETUP: Contact Dataro sales at https://dataro.io/book-a-demo. Request Core platform + Bloomerang integration. Provide: organization name, Bloomerang account details, estimated record count.
2
CONNECT DATARO TO BLOOMERANG: Dataro has a pre-built Bloomerang integration. In Dataro admin console: Settings > Integrations > Bloomerang. Authenticate with Bloomerang API key. Dataro will ingest historical donor and transaction data. Initial model training typically takes 24-72 hours.
3
CONFIGURE PREDICTION MODELS: Churn/Lapse Risk Model — predicts probability (0-100%) of each donor lapsing; configure to update scores weekly. Upgrade Propensity Model — identifies donors likely to increase giving. Campaign Response Model — predicts which donors will respond to specific appeals. Optimal Ask Amount — suggests personalized donation amounts per donor.
4
PUSH DATARO SCORES TO BLOOMERANG: Enable bidirectional sync in Dataro settings. Map Dataro fields to Bloomerang custom fields: dataro_churn_score → Bloomerang 'Lapse Risk Score' (custom number field); dataro_upgrade_score → Bloomerang 'Upgrade Propensity' (custom number field); dataro_recommended_ask → Bloomerang 'Suggested Ask' (custom currency field).
5
UPDATE POWER BI DASHBOARDS: Add Dataro scores to Power BI data model. New Dashboard Page: 'AI Predictions' — heat map of donor portfolio by churn risk (Red/Yellow/Green), table of top 50 at-risk donors with recommended action, KPI for predicted revenue at risk from lapsing donors, bar chart of campaign response prediction by segment. Add conditional formatting to Lapsed Donor Analysis page — highlight donors with Dataro churn score > 70 in red.
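The "predicted revenue at risk" KPI is each donor's recent annual giving weighted by their churn probability. A sketch of the calculation with illustrative column names (Dataro's actual export schema may differ):

```python
import pandas as pd

# Illustrative donor export: a Dataro-style 0-100 churn score per donor
donors = pd.DataFrame({
    "constituent_id": [1, 2, 3],
    "churn_score": [85, 40, 10],
    "last_12m_giving": [500.0, 1200.0, 250.0],
})

# Expected revenue at risk = churn probability x recent annual giving
donors["revenue_at_risk"] = donors["churn_score"] / 100 * donors["last_12m_giving"]
total_at_risk = donors["revenue_at_risk"].sum()

# The donors the dashboard would highlight in red (score > 70)
high_risk = donors[donors["churn_score"] > 70]
```

The Power BI KPI card is the equivalent aggregation over the full donor table.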
Note

Dataro requires a minimum of 12 months of transaction history and ideally 2,000+ donor records for accurate model training. The initial model training period (24-72 hours) is normal — do not expect instant predictions. Model accuracy improves over time as more data accumulates. Schedule a training session with the Development Director specifically on interpreting Dataro scores — the most common failure mode is staff ignoring or misunderstanding AI predictions. Dataro's pricing ($499/month) is the largest ongoing cost component — ensure the client understands the value proposition (typically 10-30% improvement in campaign targeting efficiency).

Step 11: DonorSearch Wealth Screening Integration (Phase 3 Optional)

If the client's budget supports it, add DonorSearch Enhanced CORE for wealth screening and prospect research. This enriches donor records with giving capacity data, philanthropic affinity scores, and public giving history from other nonprofits — providing critical context for the AI models and major gift identification.

1
DONORSEARCH ACCOUNT SETUP: Contact DonorSearch at https://www.donorsearch.net. Request Enhanced CORE tier for the client's contact count. Pricing: ~$3,000-$8,000/year for 2,000-5,000 contacts.
2
INITIAL WEALTH SCREENING: Export constituent list from Bloomerang (Name, Address, Email). Upload to DonorSearch portal for batch screening. DonorSearch will return enriched records with: Estimated giving capacity, Philanthropic affinity score (1-5 stars), Known gifts to other nonprofits, Real estate holdings, SEC insider holdings, Business affiliations.
3
IMPORT SCREENING RESULTS TO BLOOMERANG: Download DonorSearch results as CSV. Map DonorSearch fields to Bloomerang custom fields: DS Capacity Rating → Bloomerang 'Giving Capacity' (custom dropdown), DS Affinity Score → Bloomerang 'Philanthropic Affinity' (custom number), DS Major Donor Likelihood → Bloomerang 'Major Gift Prospect' (custom checkbox). Import via Bloomerang's import wizard.
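The field mapping in this step amounts to a column rename plus type coercion before running the Bloomerang import wizard. A pandas sketch; the DonorSearch column headers here are placeholders, so substitute the headers from the real export:

```python
import pandas as pd

# Placeholder DonorSearch headers -> Bloomerang custom field names
FIELD_MAP = {
    "DS Capacity Rating": "Giving Capacity",
    "DS Affinity Score": "Philanthropic Affinity",
    "DS Major Donor Likelihood": "Major Gift Prospect",
}

screening = pd.DataFrame({
    "DS Capacity Rating": ["A", "C"],
    "DS Affinity Score": [5, 2],
    "DS Major Donor Likelihood": ["Yes", "No"],
})

bloomerang_import = screening.rename(columns=FIELD_MAP)
# The checkbox field imports more cleanly as a boolean
bloomerang_import["Major Gift Prospect"] = (
    bloomerang_import["Major Gift Prospect"] == "Yes"
)
```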
4
CONNECT DONORSEARCH TO BLOOMERANG (ongoing integration): Bloomerang has a native DonorSearch integration. In Bloomerang: Settings > Integrations > DonorSearch. Enter DonorSearch API credentials. Enable real-time lookup: staff can screen individual donors from within Bloomerang.
5
UPDATE POWER BI WITH WEALTH DATA: Add DonorSearch capacity and affinity scores to the data model. New visuals: Quadrant chart: Giving Capacity vs. Current Giving (identify underleveraged donors), Major Gift Pipeline: Prospects scored by capacity × affinity × engagement.
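The quadrant chart's most actionable cell is "underleveraged" donors: high giving capacity, low current giving. A minimal classification sketch with illustrative thresholds (in practice, split on portfolio medians):

```python
def quadrant(capacity, current_giving, capacity_cut=10000, giving_cut=1000):
    """Classify a donor for the capacity-vs-giving quadrant chart.

    Thresholds are illustrative, not DonorSearch defaults.
    """
    high_cap = capacity >= capacity_cut
    high_give = current_giving >= giving_cut
    if high_cap and not high_give:
        return "Underleveraged"        # prime major-gift prospects
    if high_cap and high_give:
        return "Major donors"
    if not high_cap and high_give:
        return "Loyal over-performers"
    return "General file"
```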
Note

DonorSearch wealth screening is the most expensive add-on in this stack. Recommend it for nonprofits with active major gift programs or those seeking to start one. The screening data significantly improves Dataro's predictive accuracy since it adds external signals beyond the nonprofit's own transaction history. Re-screen the database annually to capture changes in donor capacity. Some donors may find wealth screening intrusive — ensure the nonprofit's privacy policy discloses the use of third-party data enrichment.

Step 12: Security Hardening and Compliance Configuration

Implement security controls and compliance measures to protect donor PII across all platforms. This includes access controls, encryption verification, audit logging, and compliance documentation for PCI DSS 4.0, applicable state privacy laws, and nonprofit data governance.

  • MICROSOFT 365 SECURITY: Enable Security Defaults in Azure AD: Azure AD > Properties > Manage Security Defaults > Yes. This enforces MFA for all users. Configure Conditional Access if on premium tier: Require MFA for all admin sign-ins, Block legacy authentication protocols, Require compliant devices for Power BI access.
  • BLOOMERANG SECURITY: Enable Two-Factor Authentication for all users. Review user permissions quarterly. Enable audit logging (Bloomerang tracks all data changes by user). Set session timeout to 30 minutes of inactivity.
  • DATA ENCRYPTION VERIFICATION: Verify Bloomerang uses TLS 1.2+ for data in transit (it does by default). Verify Power BI data is encrypted at rest (Microsoft manages this). Ensure QuickBooks Online connection uses OAuth 2.0 (not basic auth). Verify Mailchimp API connections use HTTPS only.
  • PCI DSS 4.0 COMPLIANCE CHECK: Verify Bloomerang's PCI compliance certificate (available on request). Ensure NO credit card numbers are stored in CRM text fields or notes. Verify all payment processing goes through PCI-compliant processors. Document: the nonprofit does NOT process cards directly; all via vendor SaaS.
  • PRIVACY POLICY UPDATE: Provide client with privacy policy template addendum (see below).
  • CREATE AI GOVERNANCE POLICY: Generate and store AI governance policy document (see below).
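For the PCI check on CRM text fields, exported notes can be scanned for card-like numbers using a regex plus a Luhn checksum to cut false positives. A hedged sketch, not a substitute for a formal PCI scan:

```python
import re

# 13-16 digits, optionally separated by spaces or hyphens
CARD_RE = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def luhn_ok(digits):
    """Luhn checksum: True for a plausibly valid card number."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text):
    """Return substrings that look like valid card numbers."""
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(m.group())
    return hits
```

Run this over a CSV export of constituent notes; any hit should be removed from the CRM and the incident documented.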
Generate donor_privacy_policy_addendum.md
bash
cat << 'PRIVACY' > donor_privacy_policy_addendum.md
# Donor Data Privacy Policy Addendum

## Data Collection and Use
We collect donor information including name, contact details, and giving
history to manage our fundraising relationships and improve our mission impact.

## Third-Party Data Processing
We use the following third-party services to process donor data:
- Bloomerang (CRM): Stores donor records and giving history
- Microsoft Power BI: Analyzes giving trends and campaign performance
- Dataro: Provides AI-powered predictions to improve fundraising effectiveness
- DonorSearch: Enriches donor profiles with publicly available information
- Mailchimp: Sends fundraising communications

## AI and Automated Decision-Making
We use artificial intelligence tools to analyze giving patterns and predict
donor behavior. These tools help us:
- Identify donors who may be at risk of disengaging
- Suggest appropriate donation amounts based on giving history
- Optimize campaign targeting for maximum impact

No automated decisions are made without human review. Donors may opt out
of AI-based analysis by contacting [privacy contact email].

## Data Retention
Donor records are retained for [7] years after last interaction.
Donors may request data deletion by contacting [privacy contact email].

## Data Security
All donor data is encrypted in transit and at rest. Access is restricted
to authorized staff with multi-factor authentication enabled.
PRIVACY
Generate ai_governance_policy.md
bash
cat << 'AIGOV' > ai_governance_policy.md
# AI Governance Policy for [Organization Name]

## Purpose
This policy governs the use of artificial intelligence and machine learning
tools in our fundraising and donor management operations.

## Approved AI Tools
| Tool | Purpose | Data Accessed | Approved By | Date |
|------|---------|---------------|-------------|------|
| Dataro | Donor prediction | Giving history, demographics | [Name] | [Date] |
| DonorSearch | Wealth screening | Public records, giving history | [Name] | [Date] |
| Power BI AI | Trend analysis | Aggregated giving data | [Name] | [Date] |

## Principles
1. Human oversight: All AI predictions are advisory; staff make final decisions
2. Transparency: Donors are informed about AI use in our privacy policy
3. Fairness: AI models are reviewed annually for bias in donor segmentation
4. Data minimization: AI tools access only the data necessary for their function
5. Opt-out: Donors may request exclusion from AI-based analysis

## Review Schedule
This policy is reviewed annually by the Board of Directors.
AIGOV
Note

76% of nonprofits lack an AI governance policy — providing one as part of the implementation is a significant value-add and differentiator for the MSP. The privacy policy addendum should be reviewed by the nonprofit's legal counsel before publication. Store all compliance documents in the client's SharePoint site. Set a calendar reminder to review access controls and compliance documentation quarterly.

Custom AI Components

Donor Retention Rate Calculator

Type: workflow A Power BI DAX calculation suite that computes year-over-year donor retention rates by segment, with industry benchmarking. Calculates overall retention, first-time donor retention, repeat donor retention, and recaptured (lapsed-then-returned) donor rates. This is the foundational metric that drives all other analytics in the solution.

Implementation:

Power BI DAX Measures for Donor Retention Analysis.
dax
// Add these measures to the Power BI data model in Power BI Desktop. Assumes
// tables: Transactions (ConstituentId, GiftDate, Amount, CampaignId) and
// Constituents (Id, FirstGiftDate)

// Measure 1: Overall Retention Rate (Year-over-Year)
Overall Retention Rate = 
VAR _CurrentYearStart = DATE(YEAR(TODAY()), 1, 1)
VAR _CurrentYearEnd = TODAY()
VAR _PriorYearStart = DATE(YEAR(TODAY()) - 1, 1, 1)
VAR _PriorYearEnd = DATE(YEAR(TODAY()) - 1, 12, 31)
VAR _PriorYearDonors = 
    CALCULATETABLE(
        VALUES(Transactions[ConstituentId]),
        Transactions[GiftDate] >= _PriorYearStart,
        Transactions[GiftDate] <= _PriorYearEnd,
        ALL(Transactions)
    )
VAR _RetainedDonors = 
    CALCULATETABLE(
        VALUES(Transactions[ConstituentId]),
        Transactions[GiftDate] >= _CurrentYearStart,
        Transactions[GiftDate] <= _CurrentYearEnd,
        TREATAS(_PriorYearDonors, Transactions[ConstituentId]),
        ALL(Transactions)
    )
RETURN
    DIVIDE(COUNTROWS(_RetainedDonors), COUNTROWS(_PriorYearDonors), 0)

// Measure 2: First-Time Donor Retention Rate
First-Time Donor Retention Rate = 
VAR _PriorYear = YEAR(TODAY()) - 1
VAR _CurrentYear = YEAR(TODAY())
VAR _FirstTimePriorYear = 
    CALCULATETABLE(
        VALUES(Transactions[ConstituentId]),
        FILTER(
            ALL(Transactions),
            YEAR(Transactions[GiftDate]) = _PriorYear
                && YEAR(RELATED(Constituents[FirstGiftDate])) = _PriorYear
        )
    )
VAR _RetainedFirstTime = 
    CALCULATETABLE(
        VALUES(Transactions[ConstituentId]),
        YEAR(Transactions[GiftDate]) = _CurrentYear,
        TREATAS(_FirstTimePriorYear, Transactions[ConstituentId]),
        ALL(Transactions)
    )
RETURN
    DIVIDE(COUNTROWS(_RetainedFirstTime), COUNTROWS(_FirstTimePriorYear), 0)

// Measure 3: Repeat Donor Retention Rate
Repeat Donor Retention Rate = 
VAR _PriorYear = YEAR(TODAY()) - 1
VAR _CurrentYear = YEAR(TODAY())
VAR _RepeatPriorYear = 
    CALCULATETABLE(
        VALUES(Transactions[ConstituentId]),
        FILTER(
            ALL(Transactions),
            YEAR(Transactions[GiftDate]) = _PriorYear
                && YEAR(RELATED(Constituents[FirstGiftDate])) < _PriorYear
        )
    )
VAR _RetainedRepeat = 
    CALCULATETABLE(
        VALUES(Transactions[ConstituentId]),
        YEAR(Transactions[GiftDate]) = _CurrentYear,
        TREATAS(_RepeatPriorYear, Transactions[ConstituentId]),
        ALL(Transactions)
    )
RETURN
    DIVIDE(COUNTROWS(_RetainedRepeat), COUNTROWS(_RepeatPriorYear), 0)

// Measure 4: Lapsed Donor Count
Lapsed Donor Count = 
VAR _PriorYear = YEAR(TODAY()) - 1
VAR _CurrentYear = YEAR(TODAY())
VAR _PriorYearDonors = 
    CALCULATETABLE(
        VALUES(Transactions[ConstituentId]),
        YEAR(Transactions[GiftDate]) = _PriorYear,
        ALL(Transactions)
    )
VAR _CurrentYearDonors = 
    CALCULATETABLE(
        VALUES(Transactions[ConstituentId]),
        YEAR(Transactions[GiftDate]) = _CurrentYear,
        ALL(Transactions)
    )
RETURN
    COUNTROWS(EXCEPT(_PriorYearDonors, _CurrentYearDonors))

// Measure 5: Recaptured Donor Count
Recaptured Donors = 
VAR _CurrentYear = YEAR(TODAY())
VAR _PriorYear = YEAR(TODAY()) - 1
VAR _TwoYearsAgo = YEAR(TODAY()) - 2
VAR _GaveTwoYearsAgo = CALCULATETABLE(VALUES(Transactions[ConstituentId]), YEAR(Transactions[GiftDate]) = _TwoYearsAgo, ALL(Transactions))
VAR _DidNotGivePriorYear = EXCEPT(_GaveTwoYearsAgo, CALCULATETABLE(VALUES(Transactions[ConstituentId]), YEAR(Transactions[GiftDate]) = _PriorYear, ALL(Transactions)))
VAR _GaveCurrentYear = CALCULATETABLE(VALUES(Transactions[ConstituentId]), YEAR(Transactions[GiftDate]) = _CurrentYear, ALL(Transactions))
RETURN
    COUNTROWS(INTERSECT(_DidNotGivePriorYear, _GaveCurrentYear))

// Industry Benchmark Reference Lines (for charts)
Industry Avg Retention = 0.453
Industry Avg FirstTime Retention = 0.193
Industry Avg Repeat Retention = 0.603
Source: Fundraising Effectiveness Project (FEP) 2024 benchmarks. Add these as constant lines on retention trend charts.
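The DAX measures above can be cross-checked against a raw transaction export. A Python sketch of the same overall-retention definition (donors who gave in the prior year and gave again in the current year), using a tiny illustrative dataset:

```python
import pandas as pd

tx = pd.DataFrame({
    "ConstituentId": [1, 1, 2, 3, 3, 4],
    "GiftDate": pd.to_datetime([
        "2023-03-01", "2024-02-10",   # donor 1: retained
        "2023-06-15",                  # donor 2: lapsed
        "2023-09-01", "2024-05-20",   # donor 3: retained
        "2024-07-04",                  # donor 4: new this year
    ]),
})

prior = set(tx.loc[tx["GiftDate"].dt.year == 2023, "ConstituentId"])
current = set(tx.loc[tx["GiftDate"].dt.year == 2024, "ConstituentId"])

retention_rate = len(prior & current) / len(prior)   # retained / prior-year donors
lapsed_count = len(prior - current)                   # gave prior year, not this year
```

Running this against a full Bloomerang export should reproduce the dashboard's Overall Retention Rate and Lapsed Donor Count; a mismatch usually points at a modeling issue in the DAX.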

Lapsed Donor Risk Scoring Agent

Type: agent A Python-based scoring agent that runs weekly to analyze donor behavior patterns and assign a lapse risk score (0-100) to every active donor in the Bloomerang database. This serves as a bridge analytics component for clients who haven't yet deployed Dataro, or as a supplementary scoring mechanism. The agent uses recency, frequency, and monetary (RFM) analysis combined with engagement signals to predict lapse probability.

Implementation:

lapsed_donor_risk_scorer.py
python
# lapsed_donor_risk_scorer.py
# Standalone Python agent for donor lapse risk scoring
# Runs weekly via scheduled task or Azure Function
# Requires: pip install pandas numpy requests

import pandas as pd
import numpy as np
import requests
from datetime import datetime, timedelta
import logging

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')

# Configuration
BLOOMERANG_API_KEY = 'YOUR_BLOOMERANG_API_KEY'
BLOOMERANG_BASE_URL = 'https://crm.bloomerang.co/api/v2'
HEADERS = {'X-API-KEY': BLOOMERANG_API_KEY, 'Content-Type': 'application/json'}

# Scoring weights (tunable per client)
WEIGHTS = {
    'recency': 0.35,      # How recently they gave
    'frequency': 0.25,    # How often they give
    'monetary': 0.10,     # Average gift size relative to their history
    'tenure': 0.10,       # How long they've been a donor
    'trend': 0.20         # Is their giving increasing or decreasing?
}

def fetch_bloomerang_data():
    """Fetch all constituents and transactions from Bloomerang API."""
    logging.info('Fetching data from Bloomerang...')
    
    constituents = []
    skip = 0
    while True:
        resp = requests.get(f'{BLOOMERANG_BASE_URL}/constituents?skip={skip}&take=50', headers=HEADERS)
        data = resp.json()
        if not data.get('Results'):
            break
        constituents.extend(data['Results'])
        skip += 50
        if skip >= data.get('Total', 0):
            break
    
    transactions = []
    skip = 0
    min_date = (datetime.now() - timedelta(days=5*365)).strftime('%Y-%m-%d')
    while True:
        resp = requests.get(
            f'{BLOOMERANG_BASE_URL}/transactions?skip={skip}&take=50&minDate={min_date}',
            headers=HEADERS
        )
        data = resp.json()
        if not data.get('Results'):
            break
        transactions.extend(data['Results'])
        skip += 50
        if skip >= data.get('Total', 0):
            break
    
    logging.info(f'Fetched {len(constituents)} constituents, {len(transactions)} transactions')
    return pd.json_normalize(constituents), pd.json_normalize(transactions)

def calculate_rfm_scores(transactions_df):
    """Calculate RFM scores for each donor."""
    now = datetime.now()
    transactions_df['Date'] = pd.to_datetime(transactions_df['Date'])
    
    rfm = transactions_df.groupby('AccountId').agg({
        'Date': lambda x: (now - x.max()).days,  # Recency (days since last gift)
        'Id': 'count',                            # Frequency (number of gifts)
        'Amount': ['mean', 'sum']                  # Monetary (avg and total)
    }).reset_index()
    
    rfm.columns = ['ConstituentId', 'Recency_Days', 'Frequency', 'Avg_Gift', 'Total_Giving']
    
    # Calculate giving trend (compare last 12 months to prior 12 months)
    twelve_months_ago = now - timedelta(days=365)
    twenty_four_months_ago = now - timedelta(days=730)
    
    recent = transactions_df[transactions_df['Date'] >= twelve_months_ago].groupby('AccountId')['Amount'].sum().reset_index()
    recent.columns = ['ConstituentId', 'Recent_12m']
    
    prior = transactions_df[
        (transactions_df['Date'] >= twenty_four_months_ago) & 
        (transactions_df['Date'] < twelve_months_ago)
    ].groupby('AccountId')['Amount'].sum().reset_index()
    prior.columns = ['ConstituentId', 'Prior_12m']
    
    rfm = rfm.merge(recent, on='ConstituentId', how='left').fillna({'Recent_12m': 0})
    rfm = rfm.merge(prior, on='ConstituentId', how='left').fillna({'Prior_12m': 0})
    
    # First gift date for tenure calculation
    first_gift = transactions_df.groupby('AccountId')['Date'].min().reset_index()
    first_gift.columns = ['ConstituentId', 'First_Gift_Date']
    rfm = rfm.merge(first_gift, on='ConstituentId', how='left')
    rfm['Tenure_Days'] = (now - rfm['First_Gift_Date']).dt.days
    
    return rfm

def score_donors(rfm_df):
    """Assign lapse risk scores (0-100, higher = more at risk)."""
    scores = pd.DataFrame()
    scores['ConstituentId'] = rfm_df['ConstituentId']
    
    # Recency Score (0-100): more days since last gift = higher risk
    # Scales linearly: ~25 at 90 days, ~50 at 180 days, capped at 100 from 365 days on
    scores['recency_score'] = np.clip(rfm_df['Recency_Days'] / 365 * 100, 0, 100)
    
    # Frequency Score (0-100): Fewer gifts = higher risk (inverse)
    max_freq = rfm_df['Frequency'].quantile(0.95)  # Cap at 95th percentile
    scores['frequency_score'] = 100 - np.clip(rfm_df['Frequency'] / max_freq * 100, 0, 100)
    
    # Monetary Score (0-100): Lower avg gift relative to capacity = moderate risk
    # This is less predictive for lapse but contributes to overall picture
    avg_overall = rfm_df['Avg_Gift'].median()
    scores['monetary_score'] = np.clip(50 - (rfm_df['Avg_Gift'] - avg_overall) / avg_overall * 25, 0, 100)
    
    # Tenure Score (0-100): Newer donors are at higher risk of lapsing
    scores['tenure_score'] = np.clip(100 - rfm_df['Tenure_Days'] / (3*365) * 100, 0, 100)
    
    # Trend Score (0-100): Declining giving = higher risk
    giving_change = np.where(
        rfm_df['Prior_12m'] > 0,
        (rfm_df['Recent_12m'] - rfm_df['Prior_12m']) / rfm_df['Prior_12m'],
        np.where(rfm_df['Recent_12m'] > 0, -0.5, 0)  # No prior-year baseline: treat as a 50% decline (elevated risk)
    )
    scores['trend_score'] = np.clip(50 - giving_change * 50, 0, 100)
    
    # Weighted composite score
    scores['lapse_risk_score'] = (
        scores['recency_score'] * WEIGHTS['recency'] +
        scores['frequency_score'] * WEIGHTS['frequency'] +
        scores['monetary_score'] * WEIGHTS['monetary'] +
        scores['tenure_score'] * WEIGHTS['tenure'] +
        scores['trend_score'] * WEIGHTS['trend']
    ).round(0).astype(int)
    
    # Risk category
    scores['risk_category'] = pd.cut(
        scores['lapse_risk_score'],
        bins=[0, 25, 50, 75, 100],
        labels=['Low', 'Moderate', 'High', 'Critical'],
        include_lowest=True
    )
    
    return scores

def push_scores_to_bloomerang(scores_df):
    """Update Bloomerang custom fields with lapse risk scores."""
    logging.info(f'Pushing {len(scores_df)} scores to Bloomerang...')
    updated = 0
    errors = 0
    
    for _, row in scores_df.iterrows():
        try:
            # Update custom field via Bloomerang API
            payload = {
                'CustomFields': [
                    {
                        'FieldId': 'LAPSE_RISK_SCORE_FIELD_ID',  # Replace with actual field ID
                        'Value': str(int(row['lapse_risk_score']))
                    }
                ]
            }
            resp = requests.put(
                f"{BLOOMERANG_BASE_URL}/constituent/{row['ConstituentId']}",
                headers=HEADERS,
                json=payload
            )
            if resp.status_code == 200:
                updated += 1
            else:
                errors += 1
                logging.warning(f"Failed to update {row['ConstituentId']}: {resp.status_code}")
        except Exception as e:
            errors += 1
            logging.error(f"Error updating {row['ConstituentId']}: {e}")
    
    logging.info(f'Scores updated: {updated}, Errors: {errors}')

def generate_alert_report(scores_df, rfm_df):
    """Generate a weekly alert report of high-risk donors."""
    high_risk = scores_df[scores_df['lapse_risk_score'] >= 70].merge(
        rfm_df[['ConstituentId', 'Recency_Days', 'Total_Giving', 'Avg_Gift', 'Recent_12m']],
        on='ConstituentId'
    ).sort_values('lapse_risk_score', ascending=False)
    
    report = f"""WEEKLY LAPSE RISK ALERT - {datetime.now().strftime('%Y-%m-%d')}
{'='*60}
Total donors scored: {len(scores_df)}
Critical risk (75-100): {len(scores_df[scores_df['risk_category'] == 'Critical'])}
High risk (50-75): {len(scores_df[scores_df['risk_category'] == 'High'])}
Moderate risk (25-50): {len(scores_df[scores_df['risk_category'] == 'Moderate'])}
Low risk (0-25): {len(scores_df[scores_df['risk_category'] == 'Low'])}

TOP 20 AT-RISK DONORS (by total lifetime giving):
{'-'*60}
"""
    
    top_at_risk = high_risk.nlargest(20, 'Total_Giving')
    for _, donor in top_at_risk.iterrows():
        report += f"ID: {donor['ConstituentId']} | Risk: {donor['lapse_risk_score']} | "
        report += f"Days Since Gift: {donor['Recency_Days']} | "
        report += f"Lifetime: ${donor['Total_Giving']:,.0f} | "
        report += f"Last 12m: ${donor['Recent_12m']:,.0f}\n"
    
    return report, high_risk

if __name__ == '__main__':
    logging.info('Starting weekly lapse risk scoring...')
    
    # Fetch data
    constituents_df, transactions_df = fetch_bloomerang_data()
    
    # Calculate RFM
    rfm_df = calculate_rfm_scores(transactions_df)
    
    # Score donors
    scores_df = score_donors(rfm_df)
    
    # Push to Bloomerang
    push_scores_to_bloomerang(scores_df)
    
    # Generate alert report
    report, high_risk_df = generate_alert_report(scores_df, rfm_df)
    
    # Save report
    with open(f"lapse_risk_report_{datetime.now().strftime('%Y%m%d')}.txt", 'w') as f:
        f.write(report)
    
    # Save detailed data for Power BI
    scores_df.merge(rfm_df, on='ConstituentId').to_csv(
        f"lapse_risk_scores_{datetime.now().strftime('%Y%m%d')}.csv", index=False
    )
    
    logging.info('Scoring complete.')
    print(report)

Campaign ROI Calculator

Type: workflow A comprehensive Power BI data model and DAX measure set that calculates true campaign ROI by combining donation revenue from Bloomerang with campaign cost data from a SharePoint-hosted cost tracking spreadsheet. Computes cost per dollar raised, net revenue, ROI percentage, cost per donor acquired, and donor lifetime value by acquisition campaign. Enables the client to compare campaign effectiveness and make data-driven budget allocation decisions.

Implementation:

This component requires two data sources:
1. Bloomerang transactions (donor gifts tagged by campaign)
2. Campaign Cost Tracker (SharePoint Excel file maintained by staff)

Campaign Cost Tracker Template — Create in SharePoint - File: Campaign_Costs.xlsx - Sheet: CampaignCosts

Campaign Cost Tracker — example rows for Campaign_Costs.xlsx
text
| CampaignId | CampaignName | FiscalYear | DirectMailCost | PrintingCost | PostageCost | DigitalAdsCost | EventCost | StaffHours | StaffHourlyRate | PlatformFees | OtherCosts | Notes |
|------------|--------------|------------|----------------|-------------|-------------|----------------|-----------|------------|-----------------|--------------|------------|-------|
| ANNUAL-2024 | Annual Appeal 2024 | 2024 | 2500 | 1800 | 3200 | 500 | 0 | 40 | 35 | 150 | 200 | Includes list rental |
| GALA-2024 | Annual Gala 2024 | 2024 | 500 | 800 | 200 | 1500 | 12000 | 120 | 35 | 500 | 3000 | Venue + catering |
| ONLINE-GEN | Online General | 2024 | 0 | 0 | 0 | 2000 | 0 | 20 | 35 | 300 | 0 | Google Ads grant |
1
Import Bloomerang transactions table into Power BI
2
Import Campaign_Costs.xlsx from SharePoint (auto-refresh)
3
Create relationship: Transactions[CampaignName] -> CampaignCosts[CampaignName]
DAX Measure: Total Campaign Cost
dax
Total Campaign Cost = 
    SUM(CampaignCosts[DirectMailCost]) +
    SUM(CampaignCosts[PrintingCost]) +
    SUM(CampaignCosts[PostageCost]) +
    SUM(CampaignCosts[DigitalAdsCost]) +
    SUM(CampaignCosts[EventCost]) +
    SUMX(CampaignCosts, CampaignCosts[StaffHours] * CampaignCosts[StaffHourlyRate]) +
    SUM(CampaignCosts[PlatformFees]) +
    SUM(CampaignCosts[OtherCosts])
DAX Measure: Total Campaign Revenue
dax
Total Campaign Revenue = SUM(Transactions[Amount])
DAX Measure: Net Campaign Revenue
dax
Net Campaign Revenue = [Total Campaign Revenue] - [Total Campaign Cost]
DAX Measure: Campaign ROI Percentage
dax
Campaign ROI % = 
    DIVIDE(
        [Total Campaign Revenue] - [Total Campaign Cost],
        [Total Campaign Cost],
        0
    )
DAX Measure: Cost Per Dollar Raised
dax
Cost Per Dollar Raised = 
    DIVIDE([Total Campaign Cost], [Total Campaign Revenue], 0)
DAX Measure: Unique Donors Per Campaign
dax
Unique Donors = DISTINCTCOUNT(Transactions[ConstituentId])
DAX Measure: New Donors Per Campaign
dax
New Donors Acquired = 
    COUNTROWS(
        FILTER(
            VALUES(Transactions[ConstituentId]),
            // New donor: their first-ever gift (across all campaigns)
            // is the same as their first gift within this campaign
            CALCULATE(
                MIN(Transactions[GiftDate]),
                ALL(Transactions[CampaignName])
            ) = CALCULATE(MIN(Transactions[GiftDate]))
        )
    )
DAX Measure: Cost Per New Donor Acquired
dax
Cost Per New Donor = DIVIDE([Total Campaign Cost], [New Donors Acquired], 0)
DAX Measure: Average Gift Size
dax
Average Gift = AVERAGE(Transactions[Amount])
DAX Measure: Revenue Per Donor
dax
Revenue Per Donor = DIVIDE([Total Campaign Revenue], [Unique Donors], 0)
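The same arithmetic as the DAX measures, applied to the ANNUAL-2024 example row from the cost tracker, makes a quick cross-check. The revenue figure here is hypothetical:

```python
# ANNUAL-2024 row from the example Campaign Cost Tracker above
row = {
    "DirectMailCost": 2500, "PrintingCost": 1800, "PostageCost": 3200,
    "DigitalAdsCost": 500, "EventCost": 0, "StaffHours": 40,
    "StaffHourlyRate": 35, "PlatformFees": 150, "OtherCosts": 200,
}

# Mirrors the Total Campaign Cost measure, including loaded staff time
total_cost = (
    row["DirectMailCost"] + row["PrintingCost"] + row["PostageCost"]
    + row["DigitalAdsCost"] + row["EventCost"]
    + row["StaffHours"] * row["StaffHourlyRate"]
    + row["PlatformFees"] + row["OtherCosts"]
)

revenue = 32000                               # hypothetical campaign revenue
net_revenue = revenue - total_cost
roi_pct = (revenue - total_cost) / total_cost  # ROI as a ratio (2.28 = 228%)
cost_per_dollar = total_cost / revenue
```

For this example the fully loaded cost is $9,750, so a $32,000 campaign nets $22,250 at roughly 228% ROI, landing in the "Yellow" band of the scorecard's conditional formatting.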

Dashboard Page Layout: Campaign ROI Scorecard

  • Top Row — KPI Cards: Total Revenue (all campaigns) | Total Cost | Overall ROI % | Cost per Dollar
  • Middle — Campaign Comparison Matrix (table visual). Columns: Campaign Name | Revenue | Cost | Net Revenue | ROI % | Cost/Dollar | Donors | New Donors | Cost/New Donor | Avg Gift. Conditional formatting: Green for ROI > 300%, Yellow for 100–300%, Red for < 100%
  • Bottom Left — Horizontal Bar Chart: Net Revenue by Campaign (sorted descending)
  • Bottom Right — Scatter Plot: Campaign Cost (X) vs Revenue (Y) with size = Donor Count. Reference line at 1:1 ratio (breakeven)

Scheduled Refresh Configuration

  • Data sources: SharePoint Excel (Campaign_Costs.xlsx) + Bloomerang CSV export
  • Refresh schedule: Daily at 7:00 AM
  • Alert: Configure Power BI data alert when any campaign drops below 100% ROI

Lapsed Donor Re-engagement Email Workflow

Type: integration An automated Zapier workflow that monitors Bloomerang for donors whose last gift date exceeds configurable thresholds and triggers personalized re-engagement email sequences through Mailchimp. Uses tiered messaging: 9-month gentle nudge, 12-month warm re-engagement, and 18-month final appeal with impact story. This closes the loop between the analytics dashboards (which identify lapsed donors) and the outbound re-engagement action.

Implementation:

Workflow Architecture

1
Trigger: Schedule (every Monday at 9:00 AM)
2
Step 1: Bloomerang API - Fetch donors with last gift 9-18 months ago
3
Step 2: Filter and categorize by lapse tier
4
Step 3: Add/update contacts in Mailchimp with appropriate tags
5
Step 4: Mailchimp triggers the appropriate automated email sequence
6
Step 5: Log action to Google Sheets for tracking

Zap 1: 9-Month Gentle Nudge

  • Trigger: Schedule by Zapier (Every Monday, 9:00 AM)
  • Action 1: Webhooks by Zapier (GET) — URL: https://crm.bloomerang.co/api/v2/constituents?lastModified=2024-01-01&take=50 | Headers: X-API-KEY: {{bloomerang_api_key}}
  • Action 2: Filter by Zapier — Only continue if: Last Transaction Date is between 240-300 days ago
  • Action 3: Mailchimp - Add/Update Subscriber — List: Lapsed Donor Re-engagement | Tag: '9-month-nudge' | Merge Fields: FNAME, LNAME, LASTGIFT (amount), LASTDATE
  • Action 4: Google Sheets - Create Row — Spreadsheet: Lapsed Donor Re-engagement Log | Columns: Date, Donor Name, Email, Last Gift Date, Last Gift Amount, Tier, Action

Zap 2: 12-Month Warm Re-engagement

  • Same structure as Zap 1
  • Filter: Last Transaction Date is between 330-400 days ago
  • Tag: '12-month-reengage'

Zap 3: 18-Month Final Appeal

  • Same structure as Zap 1
  • Filter: Last Transaction Date is between 510-570 days ago
  • Tag: '18-month-final'

Mailchimp Automation Sequences

Sequence 1: 9-Month Gentle Nudge (Tag trigger: '9-month-nudge')

  • Email 1 (Day 0) — Subject: 'We miss you, {{FNAME}}!' | Body: Brief update on organizational impact since their last gift. Soft ask. | CTA: 'See what your support made possible' → Impact page
  • Email 2 (Day 7) — Subject: 'Your impact story' | Body: Specific beneficiary story with data. Include their last gift amount as reference. | CTA: 'Continue making a difference' → Donation page with suggested amount = last gift

Sequence 2: 12-Month Warm Re-engagement (Tag trigger: '12-month-reengage')

  • Email 1 (Day 0) — Subject: '{{FNAME}}, it's been a while' | Body: Acknowledges time away. Major program update. Ask for feedback. | CTA: 'Tell us what matters to you' → Survey link
  • Email 2 (Day 5) — Subject: 'Here's what we've accomplished together' | Body: Annual impact report summary. Clear ask with specific need. | CTA: 'Rejoin our community of supporters' → Donation page
  • Email 3 (Day 12) — Subject: 'A personal note from {{ED_NAME}}' | Body: Personal message from Executive Director. Emotional appeal with urgency. | CTA: 'Make your gift today' → Donation page with lower suggested amount (50% of last gift)

Sequence 3: 18-Month Final Appeal (Tag trigger: '18-month-final')

  • Email 1 (Day 0) — Subject: 'Before you go, {{FNAME}}' | Body: Honest, direct message. 'We notice you haven't given in a while.' Final impact story. | CTA: 'Give one more time' → Donation page with lowest barrier ($25 suggested)
  • Email 2 (Day 7) — Subject: 'Stay connected even if you can't give' | Body: Offer alternatives: volunteer, share on social media, attend event. Remove donation pressure. | CTA: 'Choose how to stay involved' → Engagement options page | Also includes: Mailchimp unsubscribe link and 'Update your preferences' option

Tracking & Measurement

  • Total lapsed donors contacted per tier per week
  • Email open rates (sync from Mailchimp via Zapier)
  • Click-through rates
  • Reactivation rate (donors who give again within 60 days of sequence start)
  • Revenue recovered (total gifts from reactivated donors)
Note

This data feeds into the Power BI Campaign ROI dashboard for measuring re-engagement campaign effectiveness.
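The reactivation rate and revenue recovered can be computed by joining the Google Sheets log to a Bloomerang gift export. A pandas sketch — the column names and sample values are illustrative assumptions:

```python
import pandas as pd

# Sequence-start log (mirrors the Google Sheets columns) and gift export;
# emails and dates here are made up for illustration.
log = pd.DataFrame({
    'Email': ['a@x.org', 'b@x.org', 'c@x.org'],
    'SequenceStart': pd.to_datetime(['2024-01-01', '2024-01-01', '2024-01-08']),
})
gifts = pd.DataFrame({
    'Email': ['a@x.org', 'c@x.org'],
    'GiftDate': pd.to_datetime(['2024-02-15', '2024-05-01']),
    'Amount': [50.0, 100.0],
})

merged = log.merge(gifts, on='Email', how='left')
# Reactivated = gave within 60 days of sequence start (missing gift -> False)
merged['reactivated'] = (merged['GiftDate'] - merged['SequenceStart']).dt.days.between(0, 60)
reactivation_rate = merged.groupby('Email')['reactivated'].any().mean()
revenue_recovered = merged.loc[merged['reactivated'], 'Amount'].sum()
```

In this toy data one of three donors reactivated within the window, so the rate is one third and the recovered revenue is the $50 gift.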

Bloomerang API call used in Zapier Webhooks action to fetch recently modified constituents
http
GET https://crm.bloomerang.co/api/v2/constituents?lastModified=2024-01-01&take=50
X-API-KEY: {{bloomerang_api_key}}

Executive Donor Intelligence Summary Prompt

Type: prompt

A structured prompt template for generating monthly executive-level donor intelligence briefings using Microsoft Copilot or ChatGPT. The nonprofit's Development Director or MSP analyst uses this prompt with data exported from Power BI to produce a board-ready summary of donor retention trends, lapsed donor analysis, campaign performance, and AI-generated recommendations. This bridges the gap between raw dashboard data and actionable narrative for non-technical stakeholders.

Implementation:

Monthly Donor Intelligence Briefing Generator

# Monthly Donor Intelligence Briefing Generator
# Use with: Microsoft Copilot, ChatGPT-4, or Claude
# Input: Export key metrics from Power BI dashboards as text or paste CSV summary

## SYSTEM PROMPT (set as custom instructions or system message):

You are a senior nonprofit fundraising analyst preparing a monthly donor intelligence briefing for the Board of Directors and Executive Director of a 501(c)(3) nonprofit organization. Your analysis should be:
- Data-driven but accessible to non-technical board members
- Action-oriented with specific recommendations
- Honest about challenges while highlighting opportunities
- Benchmarked against industry standards (FEP/AFP benchmarks)

Industry benchmarks to reference:
- Overall donor retention rate: ~45%
- First-time donor retention rate: ~19.3%
- Repeat donor retention rate: ~60%
- Average cost per dollar raised (direct mail): $0.20-$0.25
- Average cost per dollar raised (online): $0.05-$0.10
- Average cost per dollar raised (events): $0.50+

## USER PROMPT TEMPLATE:

Please generate a Monthly Donor Intelligence Briefing for [MONTH/YEAR] using the following data from our analytics platform:

### RETENTION METRICS:
- Overall retention rate (YTD): [X]%
- First-time donor retention rate: [X]%
- Repeat donor retention rate: [X]%
- Total active donors: [X]
- Total lapsed donors (no gift in 12+ months): [X]
- Recaptured donors (previously lapsed, gave this month): [X]

### LAPSED DONOR ANALYSIS:
- Donors lapsed this month: [X]
- Total revenue at risk from lapsed donors: $[X]
- Average lifetime value of lapsed donors: $[X]
- Top 5 lapsed donors by lifetime giving:
  1. [Name] - $[lifetime] - Last gift: [date]
  2. [Name] - $[lifetime] - Last gift: [date]
  (etc.)
- Lapse risk distribution: [X] Critical / [X] High / [X] Moderate / [X] Low

### CAMPAIGN PERFORMANCE (this month):
| Campaign | Revenue | Cost | ROI % | New Donors | Cost/Dollar |
| [name] | $[X] | $[X] | [X]% | [X] | $[X] |
(repeat for each campaign)

### AI PREDICTION HIGHLIGHTS (if Dataro is deployed):
- Donors predicted to lapse next 90 days: [X]
- Predicted revenue at risk: $[X]
- Top upgrade candidates identified: [X]
- Predicted campaign response rate for next appeal: [X]%

### ADDITIONAL CONTEXT:
[Any relevant notes: upcoming campaign, staffing changes, economic factors, etc.]

---

Please produce a briefing with these sections:

1. **Executive Summary** (3-4 sentences: headline metrics and overall trajectory)
2. **Retention Health Check** (paragraph analyzing retention vs. benchmarks, trend direction)
3. **Lapsed Donor Spotlight** (analysis of who is lapsing and why, with recovery recommendations)
4. **Campaign Performance Review** (which campaigns are working, which aren't, why)
5. **AI Insights & Predictions** (what the models are telling us about the next 90 days)
6. **Recommended Actions** (numbered list of 3-5 specific, actionable steps for the next month)
7. **Key Metrics Dashboard** (formatted summary table of all KPIs for board packet)

Tone: Professional but warm. This is a mission-driven organization. Connect data to impact.
Length: 1.5-2 pages when printed.
Format: Use headers, bullet points, and bold for emphasis. Include a 1-sentence 'Bottom Line' at the very top.

Donor Data Quality Monitor

Type: workflow

An automated data quality monitoring workflow that runs daily to detect data integrity issues in the Bloomerang CRM before they corrupt analytics. Checks for duplicate records, missing required fields, inconsistent campaign codes, anomalous gift amounts, and stale records. Sends a weekly quality scorecard to the MSP and client administrator with issues ranked by severity.

Implementation:

donor_data_quality_monitor.py
python
# donor_data_quality_monitor.py
# Automated data quality monitoring for Bloomerang CRM
# Schedule: Daily via Windows Task Scheduler, cron, or Azure Function
# Sends weekly summary email every Monday

import pandas as pd
import requests
from datetime import datetime
import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
import json
import logging

logging.basicConfig(level=logging.INFO)

# Configuration
BLOOMERANG_API_KEY = 'YOUR_API_KEY'
BASE_URL = 'https://crm.bloomerang.co/api/v2'
HEADERS = {'X-API-KEY': BLOOMERANG_API_KEY}

# Alert recipients
MSP_EMAIL = 'alerts@yourmsp.com'
CLIENT_ADMIN_EMAIL = 'dbadmin@clientnonprofit.org'
SMTP_SERVER = 'smtp.office365.com'
SMTP_PORT = 587
SMTP_USER = 'alerts@yourmsp.com'
SMTP_PASS = 'YOUR_APP_PASSWORD'  # Use app password with MFA

# Quality thresholds
THRESHOLDS = {
    'duplicate_rate_warning': 0.02,      # >2% duplicates = warning
    'duplicate_rate_critical': 0.05,     # >5% duplicates = critical
    'missing_email_warning': 0.15,       # >15% missing emails = warning
    'missing_email_critical': 0.30,      # >30% = critical
    'anomalous_gift_min': 0.01,          # Gifts under $0.01 are suspicious
    'anomalous_gift_max': 1000000,       # Gifts over $1M need verification
    'stale_record_days': 730,            # Records not updated in 2 years
}

def fetch_data():
    """Fetch constituent and transaction data."""
    constituents = []
    skip = 0
    while True:
        resp = requests.get(f'{BASE_URL}/constituents?skip={skip}&take=50',
                            headers=HEADERS, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        if not data.get('Results'):
            break
        constituents.extend(data['Results'])
        skip += 50
        if skip >= data.get('Total', 0):
            break
    return pd.json_normalize(constituents)

def check_duplicates(df):
    """Check for potential duplicate records."""
    issues = []
    if 'PrimaryEmail.Value' in df.columns:
        email_dupes = df[df['PrimaryEmail.Value'].notna()].duplicated(
            subset=['PrimaryEmail.Value'], keep=False
        ).sum()
        dupe_rate = email_dupes / len(df) if len(df) > 0 else 0
        severity = 'CRITICAL' if dupe_rate > THRESHOLDS['duplicate_rate_critical'] else \
                   'WARNING' if dupe_rate > THRESHOLDS['duplicate_rate_warning'] else 'OK'
        issues.append({
            'check': 'Email Duplicates',
            'count': int(email_dupes),
            'rate': f'{dupe_rate:.1%}',
            'severity': severity
        })
    
    if 'FirstName' in df.columns and 'LastName' in df.columns:
        name_dupes = df.duplicated(subset=['FirstName', 'LastName'], keep=False).sum()
        issues.append({
            'check': 'Name Duplicates (potential)',
            'count': int(name_dupes),
            'rate': f'{name_dupes/len(df):.1%}' if len(df) > 0 else '0%',
            'severity': 'INFO'  # Name dupes are common (John Smith)
        })
    return issues

def check_completeness(df):
    """Check for missing required fields."""
    issues = []
    required_fields = {
        'PrimaryEmail.Value': ('Email', THRESHOLDS['missing_email_warning']),
        'PrimaryAddress.Street': ('Street Address', 0.25),
        'FirstName': ('First Name', 0.01),
        'LastName': ('Last Name', 0.01),
    }
    
    for field, (label, threshold) in required_fields.items():
        if field in df.columns:
            missing = df[field].isna().sum() + (df[field] == '').sum()
            rate = missing / len(df) if len(df) > 0 else 0
            severity = 'WARNING' if rate > threshold else 'OK'
            issues.append({
                'check': f'Missing {label}',
                'count': int(missing),
                'rate': f'{rate:.1%}',
                'severity': severity
            })
    return issues

def generate_quality_score(all_issues):
    """Calculate overall data quality score (0-100)."""
    penalties = {'CRITICAL': 20, 'WARNING': 5, 'INFO': 0, 'OK': 0}
    total_penalty = sum(penalties.get(i['severity'], 0) for i in all_issues)
    return max(0, 100 - total_penalty)

def send_report(issues, quality_score):
    """Send weekly quality report via email."""
    report_lines = [f'DONOR DATA QUALITY REPORT - {datetime.now().strftime("%Y-%m-%d")}']
    report_lines.append(f'Overall Quality Score: {quality_score}/100')
    report_lines.append('=' * 50)
    
    for issue in sorted(issues, key=lambda x: {'CRITICAL': 0, 'WARNING': 1, 'INFO': 2, 'OK': 3}[x['severity']]):
        report_lines.append(f"[{issue['severity']}] {issue['check']}: {issue['count']} records ({issue['rate']})")
    
    body = '\n'.join(report_lines)
    
    msg = MIMEMultipart()
    msg['From'] = SMTP_USER
    msg['To'] = f'{MSP_EMAIL}, {CLIENT_ADMIN_EMAIL}'
    msg['Subject'] = f'[{"ALERT" if quality_score < 80 else "OK"}] Donor Data Quality: {quality_score}/100'
    msg.attach(MIMEText(body, 'plain'))
    
    try:
        with smtplib.SMTP(SMTP_SERVER, SMTP_PORT) as server:
            server.starttls()
            server.login(SMTP_USER, SMTP_PASS)
            server.send_message(msg)
        logging.info('Quality report sent successfully')
    except Exception as e:
        logging.error(f'Failed to send report: {e}')
    
    return body

if __name__ == '__main__':
    df = fetch_data()
    all_issues = []
    all_issues.extend(check_duplicates(df))
    all_issues.extend(check_completeness(df))
    quality_score = generate_quality_score(all_issues)
    # Email the weekly summary on Mondays; alert immediately if the score drops below 80
    if datetime.now().weekday() == 0 or quality_score < 80:
        send_report(all_issues, quality_score)
    print(f'Quality score: {quality_score}/100')

    # Save daily snapshot for trend tracking
    snapshot = {'date': datetime.now().isoformat(), 'score': quality_score, 'issues': all_issues}
    with open(f"quality_snapshot_{datetime.now().strftime('%Y%m%d')}.json", 'w') as f:
        json.dump(snapshot, f)

Testing & Validation

  • BLOOMERANG DATA INTEGRITY TEST: Compare the total number of constituent records in Bloomerang (Database > Summary) against the cleaned source CSV row count. They should match within 1% (allowing for legitimate merges during import). If discrepancy exceeds 2%, investigate the import log for skipped records.
  • BLOOMERANG GIFT TOTAL VERIFICATION: Run the Bloomerang built-in report 'All Transactions' for the full date range and verify the total gift amount matches the sum from the cleaned source data CSV. Discrepancy should be $0.00 — any difference indicates a data import error.
  • RETENTION RATE CALCULATION VALIDATION: Manually calculate the prior-year-to-current-year retention rate using the raw transaction data in Excel: count unique donors who gave in both 2023 and 2024, divide by unique donors who gave in 2023. Compare this number against the Power BI 'Overall Retention Rate' DAX measure. They should match exactly.
  • LAPSED DONOR COUNT VALIDATION: In Bloomerang, create a custom report filtering for constituents whose last gift date is between 12 and 24 months ago. Compare this count against the Power BI 'Lapsed Donor Count' measure. Verify that the top 10 lapsed donors by lifetime giving appear in both the Bloomerang report and the Power BI dashboard.
  • CAMPAIGN ROI CALCULATION TEST: Select one campaign (e.g., Annual Appeal 2024). Manually sum all gifts tagged to that campaign in Bloomerang. Manually sum all costs for that campaign from the Campaign Cost Tracker spreadsheet. Calculate ROI = (Revenue - Cost) / Cost. Compare against the Power BI Campaign ROI % measure. Results should match.
  • QUICKBOOKS INTEGRATION TEST: Enter a test donation of $123.45 in Bloomerang tagged to a specific campaign and fund. Wait 15 minutes for sync. Verify the transaction appears in QuickBooks Online in the correct revenue account with the correct amount and date. Then void/delete the test transaction in both systems.
  • MAILCHIMP SEGMENT SYNC TEST: In Bloomerang, identify 5 donors tagged as 'Active' and 5 tagged as 'Lapsed'. Verify that these same contacts appear in the correct Mailchimp audience segments (Active Donors list and Lapsed Donors list respectively). Check that email addresses match exactly.
  • ZAPIER WORKFLOW TEST: Trigger each Zapier workflow manually. For the lapsed donor alert Zap, verify that a Microsoft Teams notification appears in the designated channel with the correct donor count. For the Google Sheets logging Zap, verify a new row is created with accurate data.
  • POWER BI DASHBOARD LOAD TEST: Open the Power BI dashboard in a web browser on a client workstation. Verify all three pages load within 10 seconds on the client's internet connection. Test all slicers and filters (date range, campaign, donor segment) to ensure they filter correctly. Verify drill-through from the retention page to individual donor details works.
  • DATARO PREDICTION VALIDATION (Phase 3): After Dataro's initial model training completes, export the churn risk scores for all donors. Verify that donors who actually lapsed in the prior year received high churn scores (>70) from the model. Calculate the model's accuracy: what percentage of actual lapsed donors were correctly identified as high-risk? Target: >60% of actual lapsed donors should have been scored as High or Critical risk.
  • EMAIL DELIVERABILITY TEST: Send a test email from Mailchimp to 5 different email providers (Gmail, Outlook, Yahoo, Apple Mail, organizational Exchange). Verify delivery to inbox (not spam/junk). Run the domain through mail-tester.com and verify a score of 8/10 or higher. If score is below 8, review SPF/DKIM/DMARC DNS records.
  • DATA QUALITY MONITOR TEST: Run the donor_data_quality_monitor.py script manually. Verify it completes without errors, generates a quality score, and sends the email report to both MSP and client admin addresses. Introduce a deliberate duplicate record in the test data and verify the monitor detects and reports it.
  • SECURITY AUDIT: Verify MFA is enabled for all Bloomerang user accounts. Verify MFA is enabled for all Microsoft 365 accounts. Attempt to access the Power BI workspace with a non-authorized account and confirm access is denied. Verify that the Bloomerang audit log captures a record of the test login and data access.
  • END-TO-END WORKFLOW TEST: Enter a new donation in Bloomerang. Verify it flows to QuickBooks (15 min). Verify the Power BI dataset refresh captures it (next scheduled refresh). Verify the donor's engagement score updates in Bloomerang. This tests the full data pipeline from entry to analytics.
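The retention-rate and campaign-ROI spot checks above can be reproduced in a few lines of pandas against the raw transaction export, which is faster than hand-counting in Excel. The rows, column names, and $150 cost figure below are illustrative assumptions:

```python
import pandas as pd

# Illustrative transactions; in practice, load the Bloomerang
# 'All Transactions' export instead.
tx = pd.DataFrame({
    'DonorId':  [1, 1, 2, 3, 3, 4],
    'Year':     [2023, 2024, 2023, 2023, 2024, 2024],
    'Amount':   [100, 120, 50, 200, 75, 40],
    'Campaign': ['Annual', 'Annual', 'Gala', 'Annual', 'Gala', 'Annual'],
})

# Retention: donors who gave in both years / unique donors in the prior year
prior = set(tx.loc[tx['Year'] == 2023, 'DonorId'])
current = set(tx.loc[tx['Year'] == 2024, 'DonorId'])
retention_rate = len(prior & current) / len(prior)

# Campaign ROI: (revenue - cost) / cost, matching the Power BI measure
campaign_revenue = tx.loc[tx['Campaign'] == 'Annual', 'Amount'].sum()
campaign_cost = 150  # from the Campaign Cost Tracker spreadsheet (assumed)
roi_pct = (campaign_revenue - campaign_cost) / campaign_cost * 100
```

Both results should match the corresponding Power BI DAX measures exactly; any drift points to a filter or data-sync discrepancy.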

Client Handoff

The client handoff session should be conducted as a 2-hour in-person or video meeting with the Development Director, Database Manager, Executive Director, and any staff who will use the dashboards regularly. Cover the following topics in order:

1. System Tour (30 minutes): Walk through each platform — Bloomerang CRM, Power BI dashboards, Mailchimp integration, and QuickBooks sync. Show how they connect and where data flows between them. Demonstrate daily workflows: entering a donation, running a donor report, checking the retention dashboard.
2. Dashboard Training (30 minutes): Teach staff to read and interpret each Power BI dashboard page. Cover: what retention rate means and why it matters, how to identify at-risk donors using the lapsed donor analysis, how to read campaign ROI and what constitutes a good vs. poor performing campaign. Practice using slicers and filters. Show how to export data from Power BI for board reports.
3. AI Tools Training (15 minutes, Phase 3): If Dataro is deployed, explain what the lapse risk scores mean (0-100 scale), how to use Dataro's recommended ask amounts, and how to interpret campaign response predictions. Emphasize that AI predictions are advisory — human judgment should always be applied.
4. Data Hygiene Responsibilities (15 minutes): Train the Database Manager on: entering donations consistently (always tag campaign and fund), running the monthly duplicate check in Bloomerang, maintaining the Campaign Cost Tracker spreadsheet in SharePoint (essential for ROI calculations), and responding to data quality alert emails from the monitoring script.
5. Troubleshooting Guide (10 minutes): Cover common issues: Power BI dashboard not refreshing (check scheduled refresh settings), Mailchimp segments not syncing (verify integration connection), QuickBooks transactions not appearing (check fund-to-account mapping). Provide the MSP support contact information and SLA details.
6. Documentation Delivery (10 minutes): Leave behind the following documents in the client's SharePoint site: (a) Campaign Taxonomy document with approved codes, (b) Donor Privacy Policy Addendum, (c) AI Governance Policy, (d) System Architecture Diagram showing all platforms and integrations, (e) Quick Reference Card with login URLs and key contacts, (f) Data Quality Monitoring schedule and alert thresholds, (g) Power BI DAX measure documentation for future modifications.
7. Success Criteria Review (10 minutes): Review the agreed-upon success metrics: target retention rate improvement (e.g., 5% increase within 12 months), number of lapsed donors re-engaged per quarter, campaign ROI visibility (all campaigns tracked with costs), and data quality score maintained above 85/100. Schedule the 30-day check-in and 90-day review meetings before leaving.

Maintenance

Monthly Maintenance Tasks (2-4 hours/month):

  • Review Power BI dashboard accuracy: spot-check 5 random donor records against Bloomerang to ensure data sync is working correctly
  • Review and resolve any data quality alerts from the automated monitoring script (target: maintain quality score above 85/100)
  • Run Bloomerang duplicate detection and merge confirmed duplicates (with client approval)
  • Verify all Zapier workflows are running successfully (check Zapier task history for failures)
  • Update the Campaign Cost Tracker spreadsheet with new campaign cost data (coordinate with the Development Director)
  • Review Bloomerang user access: disable accounts for departed staff, adjust permissions as needed

Quarterly Maintenance Tasks (4-6 hours/quarter):

  • Power BI dashboard review and optimization: add/modify visuals based on client feedback, update DAX measures if the campaign taxonomy changes
  • Full security audit: verify MFA is enabled for all accounts across all platforms, review Bloomerang audit logs for anomalies, confirm PCI DSS compliance of payment processors
  • Data quality deep clean: comprehensive duplicate scan, address standardization, email bounce cleanup from Mailchimp
  • Integration health check: test all Zapier workflows end-to-end, verify QuickBooks sync accuracy, confirm Mailchimp segment sync
  • Dataro model performance review (Phase 3): compare predicted vs. actual lapse rates for the quarter, assess whether model accuracy is improving or degrading
  • Client training refresher: 30-minute session to address questions, train new staff, introduce any new features

Annual Maintenance Tasks (8-12 hours/year):

  • Full Bloomerang configuration review: update custom fields and campaign/fund codes for the new fiscal year, archive inactive campaigns
  • Power BI dashboard annual refresh: update benchmark reference lines with the latest FEP/AFP industry data, create year-over-year comparison views
  • DonorSearch re-screening (if deployed): re-screen the entire database to capture changes in donor giving capacity
  • Compliance review: update the privacy policy, review the AI governance policy with the board, verify compliance with any new state privacy laws
  • Software vendor review: assess Bloomerang, Dataro, and Power BI pricing changes and feature updates; evaluate whether the current stack still best serves the client
  • Dataro model retraining trigger: request a model retrain if retention rate predictions deviate more than 15% from actual outcomes, or if the organization undergoes a major shift (merger, rebranding, significant donor base change)

SLA Considerations:

  • Critical issues (dashboard down, integration broken, data breach): 2-hour response, 8-hour resolution
  • Standard issues (report not refreshing, minor data discrepancy): 1-business-day response, 3-business-day resolution
  • Enhancement requests (new dashboard page, new integration): scoped within 5 business days, delivered per SOW

Escalation Paths:

  • Tier 1 (client self-service): Bloomerang's built-in help center and chat support for CRM questions
  • Tier 2 (MSP support): MSP help desk for integration issues, data quality problems, dashboard modifications
  • Tier 3 (vendor escalation): Bloomerang technical support for API/platform issues; Microsoft Power BI support for service outages; Dataro customer success team for model accuracy concerns
  • Emergency: MSP on-call for suspected data breaches — initiate the incident response plan, notify the client within 1 hour, assess scope within 4 hours
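The 15% deviation threshold in the Dataro retraining trigger can be checked with a few lines. The helper name and inputs below are illustrative assumptions (Dataro exposes no such function); compare an exported predicted retention or lapse rate against the observed rate:

```python
# Sketch of the 15% retraining trigger; function name is hypothetical.
def needs_retrain(predicted_rate: float, actual_rate: float, tol: float = 0.15) -> bool:
    """True when predictions deviate more than `tol` (relative) from actuals."""
    if actual_rate == 0:
        return predicted_rate != 0
    return abs(predicted_rate - actual_rate) / actual_rate > tol

# Example: predicted 45% retention vs. 38% actual is a ~18.4% relative
# deviation, which exceeds the 15% tolerance, so a retrain is requested.
```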

Alternatives

Salesforce Nonprofit Cloud + Einstein AI Path

Replace Bloomerang with Salesforce Nonprofit Cloud, leveraging the Power of Us program (10 free licenses for eligible nonprofits). Use Salesforce Einstein AI for built-in predictive analytics including donor scoring, opportunity insights, and automated next-best-action recommendations. Add Tableau CRM (formerly Einstein Analytics) for dashboards instead of Power BI. This is a single-vendor approach with the most powerful customization capabilities.

Blackbaud Raiser's Edge NXT Path

Use Blackbaud Raiser's Edge NXT as the CRM platform, leveraging Blackbaud's built-in analytics suite including the Blackbaud Intelligence platform for donor scoring, wealth screening via Blackbaud's ResearchPoint, and native dashboards. This is the dominant platform among large, established nonprofits and offers the most comprehensive fundraising-specific feature set.

Budget Path: Little Green Light + Google Looker Studio

For nonprofits with very limited budgets, use Little Green Light ($45–$90/month depending on record count) as the CRM and Google Looker Studio (free) for dashboards. Skip the AI prediction layer entirely and rely on manual RFM analysis using the custom Python scoring script provided in this guide. Use Google Workspace for Nonprofits (free) instead of Microsoft 365.
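For orientation, a minimal RFM (recency/frequency/monetary) scoring sketch is shown below. The guide's full scoring script is more elaborate; the column names, sample data, and rank-based scoring here are illustrative assumptions (with a real donor base you would bucket scores into quintiles rather than rank every donor):

```python
import pandas as pd

# Toy gift export; in practice load the Little Green Light gift CSV.
gifts = pd.DataFrame({
    'DonorId': [1, 1, 2, 3, 3, 3],
    'GiftDate': pd.to_datetime(['2024-06-01', '2024-11-01', '2023-02-01',
                                '2024-01-15', '2024-07-01', '2024-12-01']),
    'Amount': [50, 50, 500, 25, 25, 25],
})
asof = pd.Timestamp('2025-01-01')

rfm = gifts.groupby('DonorId').agg(
    recency_days=('GiftDate', lambda s: (asof - s.max()).days),
    frequency=('GiftDate', 'count'),
    monetary=('Amount', 'sum'),
)

# Higher rank = better: most recent, most frequent, highest total giving
rfm['R'] = rfm['recency_days'].rank(ascending=False, method='first').astype(int)
rfm['F'] = rfm['frequency'].rank(ascending=True, method='first').astype(int)
rfm['M'] = rfm['monetary'].rank(ascending=True, method='first').astype(int)
rfm['RFM'] = rfm[['R', 'F', 'M']].sum(axis=1)
```

Sorting by the combined RFM score surfaces the most engaged donors first; the same table, exported to CSV, can feed a Looker Studio dashboard.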

Neon CRM + Virtuous BI Path

Use Neon CRM (Neon One) as the primary CRM with its revenue-based pricing model ($99–$399/month), and add Virtuous's BI analytics layer or pair with Dataro for AI predictions. Neon CRM offers a more comprehensive feature set than Bloomerang including event management, membership tracking, and peer-to-peer fundraising built in.

DonorPerfect + Power BI + Keela Smart Ask Path
