Bi-annual report

Payer Patient Access API Scorecard Methodology

Complete methodology for the Flexpa Payer API Scorecard. Learn how we measure, score, and report on Patient Access API quality across health insurance payers.

493
Payers Evaluated
112,500+
Patient Authorization Attempts Analyzed
6
Months of Data Reviewed

Overview

The CMS Interoperability and Patient Access final rule (CMS-9115-F), effective March 2021, mandates that Medicare Advantage, Medicaid, CHIP, and Qualified Health Plan (QHP) issuers implement standardized Patient Access APIs using HL7 FHIR Release 4.0.1 and the CARIN Alliance Blue Button 2.0 Implementation Guide. These APIs must support SMART Application Launch Framework for authorization, enabling patients to share their claims and clinical data with third-party applications. While the rule established baseline technical requirements, real-world implementation quality varies dramatically.

The Flexpa Payer API Scorecard provides a data-driven evaluation of payer API quality based on production usage patterns, not synthetic testing. This methodology explains our data collection, scoring calculations, and interpretation framework to create transparency around API performance.

Regulatory compliance is the baseline—the best APIs exceed requirements to deliver exceptional patient and developer experiences. We evaluate APIs across two categories:

  • Core Implementation (meeting fundamental requirements)
  • Beyond Compliance (delivering advanced features that improve usability)

November 2025 Scorecard

Review the latest results and findings. Read the report here →

Scoring Framework

The scorecard uses a 140-point system. Core Implementation (100 points) measures fundamental functionality for production readiness. Beyond Compliance (40 points) rewards features that meaningfully enhance patient and developer experience beyond regulatory minimums.

Core Implementation (100 points)

Evaluates whether APIs are functional, reliable, and meet CMS rule requirements across two dimensions:

Infrastructure & Authorization (60 points):

  • Endpoint availability (10 pts)
  • Developer resources including sandbox, documentation, and test patients (15 pts)
  • Authorization success rates (20 pts)
  • Coverage completeness including refresh token support and all required Lines of Business (15 pts)

API Reliability (40 points):

  • FHIR request success rates (20 pts)
  • Reference resolution between resources (10 pts)
  • Refresh operation success when patients sync updated data (10 pts)

All metrics reflect production traffic, not synthetic testing.

Beyond Compliance (40 points)

Rewards features demonstrating commitment to patient empowerment and developer experience beyond regulatory requirements.

Patient Experience (28 points):

  • Refresh token duration supporting long-term access (9 pts)
  • Inactive member data access after leaving a plan (5 pts)
  • No portal registration requirement (5 pts)
  • Branded identity providers for trust (2 pts)
  • Clear eligibility error messaging (2 pts)
  • Voluntary commercial insurance support beyond CMS-mandated plans (5 pts)

Developer Features (12 points):

  • Clinical resources beyond minimum CARIN Blue Button requirements (5 pts)
  • Sub-minute sync performance (5 pts)
  • FHIR $everything operation for efficient bulk retrieval (2 pts)
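The rubric above reduces to a weighted sum with per-dimension caps. Here is a minimal sketch of how the subtotals roll up to the 140-point total; the point caps come from the rubric, but the function and field names are our own illustration, not Flexpa's production code.

```python
# Illustrative roll-up of the scorecard's point structure (hypothetical
# helper names; point caps taken from the rubric above).
CORE_MAX = {"infrastructure_authorization": 60, "api_reliability": 40}
BEYOND_MAX = {"patient_experience": 28, "developer_features": 12}

def total_score(core: dict, beyond: dict) -> int:
    """Sum category scores, clamping each dimension to its rubric maximum."""
    score = 0
    for name, cap in CORE_MAX.items():
        score += min(core.get(name, 0), cap)   # unknown metrics default to 0
    for name, cap in BEYOND_MAX.items():
        score += min(beyond.get(name, 0), cap)
    return score

# A hypothetical payer scoring well on core but modestly beyond compliance:
print(total_score(
    {"infrastructure_authorization": 55, "api_reliability": 36},
    {"patient_experience": 14, "developer_features": 7},
))  # 112 out of a possible 140
```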

Data Collection Methodology

All metrics in this scorecard are derived from real production usage through the Flexpa platform. We do not rely on payer self-reporting or synthetic test scenarios. Every authorization attempt, API request, and sync operation that flows through our platform contributes to these measurements.

Measurement Windows

Most metrics use rolling 6-month windows to balance statistical significance with temporal relevance. This timeframe typically yields sufficient sample sizes (n > 30) for reliable rate calculations while capturing recent performance changes. Authorization success rates, for instance, aggregate all attempts over 6 months, smoothing daily variance while reflecting current operational state. Growth metrics compare the most recent 6 months against the prior 6 months to identify trends independent of seasonal effects.

Some metrics analyze only the most recent sync job per payer. This approach sacrifices statistical robustness for query performance and currency. We use single-sync sampling for $everything operation detection and FHIR API success rates.

Eligibility Criteria

The scorecard includes payers subject to the CMS Interoperability and Patient Access final rule. Specifically, organizations must have CMS-reportable covered lives and at least one live endpoint configured in our system. Payers without configured endpoints or those that have explicitly blocked access are listed but receive minimal scores.

Handling Unknown Values

Not all metrics can be calculated for all payers. When we lack sufficient data to evaluate a metric, we mark it as "unknown" and assign 0 points. This differs from a false value, where we have data confirming the feature is absent. For example, if we've never seen a successful authorization for a payer, we cannot calculate authorization success rate—that's unknown. If we've seen authorizations but none included refresh tokens, that's a confirmed false. This distinction maintains transparency about our measurement confidence.
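The unknown-versus-false distinction maps naturally onto a tri-state value. A minimal sketch (the naming and shape are ours, not Flexpa's schema) of how both states score 0 points while remaining distinguishable:

```python
from typing import Optional

def score_boolean_metric(observed: Optional[bool], points: int) -> tuple:
    """Score a yes/no metric where None means 'no data', not 'feature absent'."""
    if observed is None:
        return 0, "unknown"        # never observed the conditions needed to measure
    if observed:
        return points, "confirmed true"
    return 0, "confirmed false"    # we have data showing the feature is absent

# Refresh token support (5 pts): authorizations seen, none carried a refresh token.
print(score_boolean_metric(False, 5))  # (0, 'confirmed false')
# No successful authorizations ever observed: the metric cannot be calculated.
print(score_boolean_metric(None, 5))   # (0, 'unknown')
```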

Usage Statistics

Before diving into scored metrics, we provide context about overall API adoption and usage patterns. These metrics don't contribute to scores but help interpret the results. A high-scoring API with low usage might indicate recent improvements, while a lower-scoring API with high usage might be more critical to address.

Patient Authorization Attempts

The total number of times patients have attempted to authorize access through this payer's API. We exclude "bounced" attempts where patients immediately exited without completing the authorization flow, as these typically do not indicate substantive attempts. This metric provides a sense of overall API usage, adoption, and interest. We hope this metric can encourage payers to invest in these APIs that patients are actively utilizing.

Success, Abandon, and Error Rates

What percentage of authorization attempts succeed? How often do patients abandon the flow partway through? How often do technical errors occur? These percentages sum to 100% of non-bounced attempts and reveal the patient experience during authorization. Successful authorizations (states: EXCHANGED, REVOKED, or AUTHORIZED) indicate the patient completed the flow and the payer returned valid credentials. Abandons mean the patient started but didn't finish. Errors indicate technical failures on the payer side.
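As a sketch, the rate calculation is a partition of non-bounced attempts into three mutually exclusive outcomes. The state names EXCHANGED, REVOKED, and AUTHORIZED come from the text above; the rest of the structure (including the ABANDONED state name) is illustrative:

```python
SUCCESS_STATES = {"EXCHANGED", "REVOKED", "AUTHORIZED"}

def outcome_rates(attempts: list) -> dict:
    """Partition non-bounced attempts into success/abandon/error percentages."""
    scored = [a for a in attempts if not a.get("bounced")]
    n = len(scored)
    if n == 0:
        return {"success": None, "abandon": None, "error": None}  # unknown
    counts = {"success": 0, "abandon": 0, "error": 0}
    for a in scored:
        if a["state"] in SUCCESS_STATES:
            counts["success"] += 1
        elif a["state"] == "ABANDONED":
            counts["abandon"] += 1
        else:
            counts["error"] += 1
    # The three percentages always sum to 100% of non-bounced attempts.
    return {k: round(100 * v / n, 1) for k, v in counts.items()}

attempts = [
    {"state": "EXCHANGED"}, {"state": "AUTHORIZED"}, {"state": "ABANDONED"},
    {"state": "ERROR"}, {"state": "EXCHANGED", "bounced": True},  # excluded
]
print(outcome_rates(attempts))  # {'success': 50.0, 'abandon': 25.0, 'error': 25.0}
```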

Growth Rate

Comparing authorization attempts in the last 6 months versus the prior 6 months shows whether API usage is growing or declining. Positive growth might indicate improving reliability or expanding developer adoption. Negative growth could suggest issues are driving users away. This metric is also impacted by Flexpa's growth and adoption fluctuation in certain service areas.
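The growth comparison is a straightforward period-over-period calculation; a sketch under the 6-month windows described above:

```python
from typing import Optional

def growth_rate(recent_6mo: int, prior_6mo: int) -> Optional[float]:
    """Percent change in authorization attempts, recent window vs. prior window."""
    if prior_6mo == 0:
        return None  # no baseline: growth is unknown rather than infinite
    return round(100 * (recent_6mo - prior_6mo) / prior_6mo, 1)

print(growth_rate(1_300, 1_000))  # 30.0  (usage grew 30%)
print(growth_rate(800, 1_000))    # -20.0 (usage declined 20%)
```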

Top Abandon Reason

When patients abandon authorization flows, we capture the method they used (e.g., clicked "Cancel", closed the browser, encountered an error screen). The most common abandon method for each payer, along with its frequency, reveals where patients are getting stuck. We exclude timeout-based abandons, as these are often ambiguous. As a third-party developer, we lack deeper visibility into the OAuth process itself, so we strongly encourage payers to improve logging and troubleshooting during authentication.

Core Implementation (100 points)

This section evaluates the foundational elements of a functional Patient Access API: Is it available? Can developers test it? Does authorization work? Can we sync FHIR resources?

Endpoint Status (10 points)

The most basic question: Is the API available for production use? We track endpoint status based on our testing and validation.

Scoring:

  • 10 points: Connected
  • 5 points: In progress (We're currently integrating with the payer)
  • 2 points: Broken (We're tracking it, but it's not working)
  • 0 points: Unavailable or Unknown (We haven't found a developer portal or contact)

Sandbox Environment (8 points)

The 21st Century Cures Act requires certified health IT to provide production-like testing environments. Sandboxes prevent developers from polluting production audit logs, triggering security alerts, or accidentally accessing real patient data during integration work. We detect sandbox availability by checking for test endpoints in our system either for the specific endpoint, a parent organization, or a vendor. Sandboxes accelerate onboarding, reduce support burden, and demonstrate commitment to ecosystem enablement.

Scoring:

  • 8 points: Sandbox Available
  • 0 points: Sandbox Unavailable

Sandbox Test Patient (4 points)

Sandbox environments are valueless without functional test data. We confirm test patient availability by detecting successful authorizations in TEST mode. Ideally, test patients span edge cases: multiple coverage periods, various claim types, inactive members, different LOBs. Without working test credentials, developers must debug blindly or risk production testing—lengthening integration timelines and increasing patient-facing errors.

Scoring:

  • 4 points: Sandbox Test Patient Available
  • 0 points: Sandbox Test Patient Unavailable

FHIR CapabilityStatement (3 points)

FHIR servers should expose CapabilityStatement resources at the metadata endpoint per FHIR R4 specification, documenting supported resources, operations, and search parameters. While specifications theoretically eliminate the need for extensive documentation, real-world implementation variance makes these resources essential for debugging and onboarding.

Scoring:

  • 3 points: CapabilityStatement Available
  • 0 points: CapabilityStatement Unavailable

SMART Configuration (3 points)

SMART on FHIR implementations must provide SMART configuration details at [BASE_URL]/.well-known/smart-configuration containing OAuth 2.0 endpoints, supported grant types, and capabilities. This metadata is essential for applications to dynamically discover authorization endpoints and understand the server's SMART capabilities.

Scoring:

  • 3 points: SMART Configuration Available
  • 0 points: SMART Configuration Unavailable
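Both discovery documents live at well-known paths relative to the FHIR base URL, so a conformance probe is just a pair of GETs. A hedged sketch using only the standard library (the base URL below is a placeholder; real servers may require specific Accept headers or return errors that warrant richer handling):

```python
import json
from urllib.request import Request, urlopen

def discovery_urls(base_url: str) -> dict:
    """The two standard discovery documents, relative to the FHIR base URL."""
    base = base_url.rstrip("/")
    return {
        "capability_statement": f"{base}/metadata",
        "smart_configuration": f"{base}/.well-known/smart-configuration",
    }

def probe(base_url: str) -> dict:
    """GET each discovery document; True if it returns HTTP 200 with a JSON body."""
    results = {}
    for name, url in discovery_urls(base_url).items():
        req = Request(url, headers={"Accept": "application/fhir+json"})
        try:
            with urlopen(req, timeout=10) as resp:
                results[name] = resp.status == 200 and bool(json.load(resp))
        except Exception:
            results[name] = False
    return results

# e.g. probe("https://fhir.example-payer.com/r4")  # hypothetical endpoint
```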

Developer Portal (2 points)

Developer portals with documentation, client registration, and support contacts further streamline integration. These resources help developers understand payer-specific implementation details and provide support channels when issues arise.

Scoring:

  • 2 points: Developer Portal Available
  • 0 points: Developer Portal Unavailable

Authorization Success Rate (20 points)

Authorization success rate is the highest-weighted metric in infrastructure scoring. The score aggregates outcomes over 6 months of patient attempts (excluding immediate bounces) and reflects the entire SMART on FHIR authorization flow: identity provider stability, eligibility verification systems, redirect handling, and token exchange reliability. High success rates (≥95%) indicate production-grade identity infrastructure. Low rates reveal systemic issues: authentication timeouts, eligibility verification false negatives, misconfigured OAuth scopes, or fragile identity provider dependencies. Authorization failures are patient-facing, directly eroding trust in both the payer and the consuming application, such as Flexpa.

Scoring:

  • 20 points: ≥95% success rate
  • 17 points: 90-94% success rate
  • 14 points: 80-89% success rate
  • 10 points: 70-79% success rate
  • 6 points: 60-69% success rate
  • 3 points: 50-59% success rate
  • 0 points: <50% or unknown
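The tiering above is a step function from observed rate to points. A direct transcription as a sketch, with None standing in for "unknown":

```python
from typing import Optional

def authorization_points(success_rate: Optional[float]) -> int:
    """Map a 6-month authorization success rate (0-100) to its 20-point tier."""
    if success_rate is None:
        return 0  # unknown scores the same as the lowest tier
    tiers = [(95, 20), (90, 17), (80, 14), (70, 10), (60, 6), (50, 3)]
    for threshold, points in tiers:
        if success_rate >= threshold:
            return points
    return 0  # below 50%

print(authorization_points(96.2))  # 20
print(authorization_points(83.0))  # 14
print(authorization_points(None))  # 0
```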

Refresh Token Support (5 points)

OAuth 2.0 access tokens are typically short-lived (15 minutes to 1 hour) for security. Without refresh tokens, patients must re-authorize through the full SMART launch flow each time access expires—disrupting workflows and degrading trust. Refresh tokens enable applications to obtain new access tokens without user interaction, supporting continuous data synchronization essential for longitudinal health records. Refresh tokens are fundamental to practical API usability.

Scoring:

  • 5 points: Refresh Tokens Supported
  • 0 points: Refresh Tokens Not Supported
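The refresh grant itself is standard OAuth 2.0 (RFC 6749 §6): a POST to the payer's token endpoint exchanging the stored refresh token for a fresh access token. A minimal standard-library sketch; the token URL and client identifiers are placeholders, and payers vary in how they require clients to authenticate:

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def refresh_request_body(refresh_token: str, client_id: str) -> bytes:
    """Form-encoded body for an RFC 6749 §6 refresh grant."""
    return urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,  # some payers require client_secret or Basic auth instead
    }).encode()

def refresh_access_token(token_url: str, refresh_token: str, client_id: str) -> dict:
    """POST the refresh grant to the payer's token endpoint; return the new token set."""
    req = Request(token_url, data=refresh_request_body(refresh_token, client_id),
                  headers={"Content-Type": "application/x-www-form-urlencoded",
                           "Accept": "application/json"})
    with urlopen(req, timeout=10) as resp:
        return json.load(resp)  # may include a rotated refresh_token: persist it

# e.g. refresh_access_token("https://auth.example-payer.com/oauth/token",
#                           "<stored refresh token>", "my-client-id")
```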

All CMS Lines of Business (5 points)

The CMS Interoperability final rule applies to Medicare Advantage (Part C), Medicaid managed care, CHIP, and Qualified Health Plans on the federal and state exchanges. We verify against industry health plan data that all applicable LOBs are represented in the payer's endpoint configuration. Patients shouldn't need to understand payer organizational structure to access their data.

Scoring:

  • 5 points: All Required Lines of Business Available
  • 0 points: Missing One or More Lines of Business

FHIR API Success Rate (20 points)

After successful authorization, the real work begins: fetching patient data via FHIR API requests. This metric measures how reliably those requests succeed, reflecting actual production traffic patterns rather than contrived test scenarios. When we make FHIR requests to fetch resources, what percentage return HTTP 200? Occasionally we see 200s across the board while zero resources are fetched; we count that as a failing FHIR request. We calculate this from the most recent sync job per payer, examining all FHIR resource requests in that job. This metric captures whether the payer's FHIR server is stable, properly handling search queries, and returning valid responses.

Scoring:

  • 20 points: ≥99% success rate
  • 16 points: 95-98% success rate
  • 12 points: 90-94% success rate
  • 8 points: 80-89% success rate
  • 4 points: 70-79% success rate
  • 2 points: 50-69% success rate
  • 0 points: <50% or unknown
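Because a 200 can still carry an empty result, classifying a response takes more than the status code. A sketch of the distinction described above (field names follow FHIR R4 searchset Bundles; the per-request criterion is our paraphrase of the text, and in practice the zero-resource case is judged across a whole sync job, since an individual empty search can be legitimate):

```python
def request_succeeded(status: int, bundle: dict) -> bool:
    """A FHIR search 'succeeds' only if it returns HTTP 200 AND yields resources."""
    if status != 200:
        return False
    if bundle.get("resourceType") != "Bundle":
        return False
    return len(bundle.get("entry", [])) > 0  # 200 with zero entries counts as failing

ok = {"resourceType": "Bundle", "type": "searchset",
      "entry": [{"resource": {"resourceType": "ExplanationOfBenefit"}}]}
empty = {"resourceType": "Bundle", "type": "searchset", "total": 0}

print(request_succeeded(200, ok))     # True
print(request_succeeded(200, empty))  # False: the "200s but 0 resources" case
print(request_succeeded(500, ok))     # False
```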

Reference Resolution Success Rate (10 points)

FHIR's resource model relies heavily on references between resources. For example, ExplanationOfBenefit resources reference Practitioners, Organizations, and Patients; Coverage resources reference subscribers and beneficiaries. The FHIR specification defines reference formats (relative, absolute, logical) and expects servers to support resolution. When reference resolution fails, applications receive incomplete data graphs requiring defensive coding and degrading clinical context. We track requests specifically for reference resolution and measure success rates based on HTTP 200 responses. High failure rates indicate poor database integrity or incomplete FHIR server implementations.

Scoring:

  • 10 points: ≥95% success rate
  • 8 points: 90-94% success rate
  • 6 points: 80-89% success rate
  • 4 points: 70-79% success rate
  • 2 points: 60-69% success rate
  • 0 points: <60% or unknown
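In practice, reference resolution means walking the reference fields of fetched resources and issuing follow-up reads, each of which either returns 200 or fails. A sketch of the extraction step (the ExplanationOfBenefit shape below is a minimal illustrative fragment, not a complete resource):

```python
def collect_references(resource) -> list:
    """Recursively gather FHIR Reference values ('Type/id' strings) from a resource."""
    refs = []
    if isinstance(resource, dict):
        for key, value in resource.items():
            if key == "reference" and isinstance(value, str):
                refs.append(value)
            else:
                refs.extend(collect_references(value))
    elif isinstance(resource, list):
        for item in resource:
            refs.extend(collect_references(item))
    return refs

eob = {
    "resourceType": "ExplanationOfBenefit",
    "patient": {"reference": "Patient/123"},
    "provider": {"reference": "Organization/456"},
    "careTeam": [{"provider": {"reference": "Practitioner/789"}}],
}
print(collect_references(eob))
# ['Patient/123', 'Organization/456', 'Practitioner/789']
```

Each collected reference becomes a follow-up GET against the FHIR base URL, and the share of those reads returning HTTP 200 is what this metric measures.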

Refresh Sync Success Rate (10 points)

When a patient returns to an application after their initial authorization, the app typically triggers a refresh sync to pull updated data (new claims, updated coverage, additional procedures). These refresh syncs use the refresh token we discussed earlier. We measure what percentage of refresh jobs completed successfully over the last 6 months. Failed refresh syncs frustrate patients who expect to see their latest information and force developers to implement complex retry logic or re-engage the member.

Scoring:

  • 10 points: ≥95% success rate
  • 8 points: 90-94% success rate
  • 6 points: 80-89% success rate
  • 4 points: 70-79% success rate
  • 2 points: 60-69% success rate
  • 0 points: <60% or unknown

Beyond Compliance (40 points)

The CMS rule establishes a baseline, but exceptional payers, those who see patient access as a win-win, go further. This section of the scorecard evaluates features that meaningfully improve patient and developer experience and drive real, widespread adoption.

Refresh Token Duration (9 points)

How long can a patient maintain access before they must re-authorize? Short refresh token lifetimes (30 days or less) force patients to repeatedly sign in, defeating the purpose of refresh tokens and increasing patient dissatisfaction. This metric directly impacts patient convenience and third-party application reliability.

Scoring:

  • 9 points: Indefinite
  • 6 points: 180-365 days
  • 4 points: 90-179 days
  • 2 points: 30-89 days
  • 1 point: 1-29 days
  • 0 points: Unknown

Inactive Member Access (5 points)

When patients leave a health plan (changing employment, aging into Medicare, relocating) their historical claims data shouldn't disappear. The CMS rule requires payers to maintain API access "for a period appropriate to and consistent with applicable law". This deliberately vague language has resulted in wide implementation variance. Some payers maintain 5+ years of inactive member access, enabling true longitudinal records. Others terminate access almost immediately after coverage ends, fragmenting patient data across life transitions. Longer retention periods support continuity of care, appeal rights, and patient ownership of their health information. Short or non-existent inactive member access defeats the spirit of the patient access effort.

Scoring:

  • 5 points: ≥1825 days (5+ years)
  • 4 points: 730-1824 days (2-5 years)
  • 3 points: 365-729 days (1-2 years)
  • 2 points: <365 days
  • 0 points: Not supported or unknown

No Portal Opt-in Required (5 points)

Some payers gate API access behind member portal registration or an opt-in to data sharing that is disabled by default. This additional hurdle causes patients to abandon the connection flow. The consent step built into the SMART on FHIR authorization process should eliminate the need for a separate opt-in.

Scoring:

  • 5 points: No Portal Opt-in Required
  • 0 points: Portal Opt-in Required

Branded Identity Provider (2 points)

Many payers use white-labeled vendor solutions for their Patient Access APIs. While these can work well, they sometimes create confusing experiences where patients are redirected to unfamiliar vendor domains during authorization. These authorization pages are often extremely barebones, lack any branding or logos familiar to the member, and altogether look untrustworthy. We award 2 points to payers using their own branded identity providers, as this typically indicates tighter integration and a more coherent patient experience.

Scoring:

  • 2 points: Branded Identity Page
  • 0 points: Unbranded Identity Page

Eligibility Error Messages on Callback (2 points)

When patients attempt authorization but aren't eligible (coverage hasn't started, membership has lapsed, wrong line of business), clear error messaging is critical. The best payers return structured eligibility errors that are distinguishable from authentication failures or system errors, enabling applications to present actionable guidance. Without explicit eligibility signals returned programmatically to the application's callback, applications receive opaque failures, or none at all if the errors appear only on the authorization interface. Patients are then left with an error message they don't understand.

Scoring:

  • 2 points: Clear Eligibility Error Messages
  • 0 points: No Clear Eligibility Error Messages

Commercial Lines of Business (5 points)

The CMS Interoperability rule targets government-funded coverage: Medicare Advantage, Medicaid, CHIP, and ACA marketplace plans. Commercial employer-sponsored insurance, covering approximately 160 million Americans, remains exempt from federal Patient Access API mandates. Payers voluntarily extending their FHIR APIs to commercial populations earn 5 points, as this dramatically expands data access beyond the regulatory floor. Voluntary commercial support is particularly valuable for continuity (e.g., patients transitioning from employer coverage to Medicare) and for reducing patient confusion ("Why can't I connect this plan?").

Scoring:

  • 5 points: Commercial Lines of Business Supported
  • 0 points: Commercial Lines of Business Not Supported

Clinical Resources Available (5 points)

The CARIN Blue Button Implementation Guide mandates three financial resources: Patient (demographics), Coverage (insurance), and ExplanationOfBenefit (adjudicated claims). However, comprehensive interoperability requires clinical data like Condition (diagnoses), Procedure, MedicationRequest, Observation (vitals, labs), Immunization, AllergyIntolerance, CarePlan, and more from the US Core profiles. Payers increasingly adjudicate claims alongside clinical encounters, creating opportunities to expose richer data through Patient Access APIs. We count distinct FHIR resource types beyond the required three based on the last 6 months of production requests. Availability of clinical resources enables applications to support care coordination, medication reconciliation, and clinical decision support.

Scoring:

  • 5 points: ≥8 clinical resources
  • 4 points: 6-7 resources
  • 3 points: 4-5 resources
  • 2 points: 2-3 resources
  • 1 point: 1 resource
  • 0 points: 0 resources

Fast Sync Speed (5 points)

Sync latency is the time from authorization completion to data availability. We measure median sync duration for completed jobs over 6 months. Sub-60-second syncs (median) earn full 5 points. Sync time reflects FHIR server response latency, pagination efficiency, database query optimization, and network infrastructure. Multi-minute syncs force developers to implement polling mechanisms, progress indicators, and timeout handling—adding complexity. Patients perceive slow syncs as failures, abandoning applications or doubting authorization success. Fast syncs enable synchronous authorization flows where patients see their data immediately after granting access.

Scoring:

  • 5 points: <60 seconds median sync time
  • 3 points: 60-120 seconds median sync time
  • 1 point: 120-300 seconds median sync time
  • 0 points: >300 seconds or unknown

$everything Support (2 points)

We check the most recent sync job per payer for usage of the $everything operation. This is a binary metric - either we've observed successful $everything requests or we haven't. While worth only 2 points (it's a convenience, not essential), $everything support significantly reduces integration complexity for developers and signals a mature FHIR implementation.

Scoring:

  • 2 points: $everything Operation Supported
  • 0 points: $everything Operation Not Supported
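$everything is invoked as a GET on the Patient instance, replacing what would otherwise be one search per resource type with a single paginated call. A sketch of the URL construction (the base URL, patient id, and page size are placeholders):

```python
def everything_url(base_url: str, patient_id: str, count: int = 100) -> str:
    """Patient-level $everything per FHIR R4: GET [base]/Patient/[id]/$everything."""
    return f"{base_url.rstrip('/')}/Patient/{patient_id}/$everything?_count={count}"

print(everything_url("https://fhir.example-payer.com/r4", "abc-123"))
# https://fhir.example-payer.com/r4/Patient/abc-123/$everything?_count=100
```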

Data Quality and Limitations

This methodology is designed for transparency, but it's important to understand what these scores can and cannot tell you.

Representativeness

All metrics derive from production traffic through Flexpa's infrastructure. This provides an authentic signal of real patients authorizing real apps, but it introduces sampling bias. Payers with broader Flexpa adoption see more authorizations, yielding statistically robust rate calculations and better edge-case coverage (inactive members, refresh token expirations, reference resolution failures). Payers with limited Flexpa traffic may have sparse or non-existent samples, resulting in "unknown" metrics. This methodology cannot observe direct payer-to-app integrations that bypass our platform, potentially underrepresenting usage. Our measurements do, however, include production test patients, so we welcome payers to share production test user credentials with us for updated scoring.

Temporal Accuracy

Most metrics use 6-month rolling windows to balance statistical significance with recency. This means improvements or degradations in API quality take time to fully reflect in scores. A payer that fixed a major issue last week will still show the impact of that issue in metrics covering the past 6 months. Some metrics (like $everything support or FHIR API success rate) look only at the most recent sync job, providing a current snapshot.

Unknown versus False

When a metric shows "unknown," it means we lack sufficient data to make a determination. It is not that we've confirmed the feature is absent. This is particularly common for newer payers, those with limited usage, or features that require specific conditions to observe (like inactive member access, which requires attempting authorization after coverage ends). A false value, in contrast, means we have data confirming the condition is not met. Both score 0 points, but they mean different things about our confidence in the measurement.

Scope Limitations

This scorecard evaluates only Patient Access APIs, the member-facing FHIR endpoints mandated under CMS-9115-F. It does not measure other interoperability requirements: Provider Directory APIs (CMS-9115-F), Payer-to-Payer exchange (CMS-0057-F), Prior Authorization APIs (CMS-0057-F), or the Admission/Discharge/Transfer (ADT) event notification requirements. Nor do we assess data quality—completeness of claims histories, accuracy of procedure codes, richness of clinical detail. We measure only transport-layer reliability. An API scoring 130 points might still return sparse or outdated data if the underlying systems lack integration. Conversely, a payer with comprehensive data but unreliable infrastructure will score lower despite good data stewardship.

Non-Scorecard Analyses

This edition's report includes four analyses that quantify challenges, progress, and complexity across the payer ecosystem outside of individual scorecards.

Analysis 1: The Road to Establishing Contact

This measure tracks how long we've attempted to connect with payers where we've identified an endpoint but still haven't established a working connection. This reveals the scale of ongoing connection efforts and highlights organizations where we've been unable to establish contact despite extended attempts.

Buckets:

  • Less than 1 month (0-29 days)
  • Less than 3 months (30-89 days)
  • Less than 6 months (90-179 days)
  • Less than 1 year (180-364 days)
  • Less than 2 years (365-729 days)
  • Over 2 years (730+ days)

Analysis 2: Time to Production

This analysis measures the time from initial endpoint discovery to a successfully established working connection. It tracks the duration of successful integration efforts, providing insight into realistic timelines for developers and demonstrating to payers how lengthy these processes can be. The distribution highlights both quick implementations and drawn-out ones. We're always pleased when payers proactively reach out to collaborate; that engagement significantly accelerates the process.

Buckets:

  • Less than 1 month
  • Less than 3 months
  • Less than 6 months
  • Less than 1 year
  • Less than 2 years
  • Over 2 years

Analysis 3: Custom Configuration Requirements

HL7 FHIR R4 and SMART on FHIR define explicit implementation patterns: standard HTTP headers (application/fhir+json), OAuth 2.0 bearer tokens, FHIR-defined search parameters (_count, patient), and consistent pagination. These specifications exist to enable plug-and-play interoperability. Applications should be able to work across any compliant server without custom code. Reality diverges significantly. We've created 12 categories of custom configurations to accommodate payer-specific deviations (see table below). Each customization represents technical debt, increased integration costs, and barriers to ecosystem growth. This reveals the extent of non-standard implementations across the payer ecosystem and the engineering burden required to support them.

Buckets:

  • 1 custom config type
  • 2 custom config types
  • 3 custom config types
  • 4+ custom config types

| Configuration | Field Name | Standard Behavior | Why It's Custom |
| --- | --- | --- | --- |
| Custom Accept Header | custom_accept_header | application/fhir+json or application/fhir+xml | Payer requires a non-standard Accept header value |
| Custom Header Key | custom_header_key | Only standard HTTP headers (Authorization, Content-Type, etc.) | Payer requires additional proprietary HTTP headers |
| Custom EOB Date Search | eob_date_search_param | date or FHIR-defined parameters like service-date | Payer uses non-standard date parameter names for EOB searches |
| Custom Offset Parameter | offset_param_name | _offset for pagination | Payer uses a different parameter name for pagination offset |
| Custom Count Parameter | count_search_param | _count for page size | Payer uses a different parameter name for page size |
| mTLS (Mutual TLS) | use_mtls | OAuth 2.0 bearer tokens only | Requires client certificate authentication at the TLS layer, adding PKI management complexity beyond standard OAuth 2.0 |
| ID Token Bearer | id_token_bearer | Access token in Authorization header | Conflates OAuth 2.0 access tokens with OpenID Connect ID tokens. ID tokens verify identity to the client, not authorize API access—violates OAuth/OIDC separation of concerns |
| ID Token Header | id_token_header | Access token in Authorization header only | Requires passing the OIDC ID token in a custom header alongside the access token—a non-standard dual-token pattern |
| Underscored Patient Search | underscored_patient_search | patient parameter (lowercase, no underscore) | Payer uses _patient or other underscore-prefixed variations |
| Skip Everything | skip_everything | $everything operation to retrieve all patient data | Payer's $everything implementation is broken or incomplete |
| Specific Include Resources | specific_include_resources | Standard _include with FHIR-defined resource relationships | Payer requires non-standard include patterns or specific resource combinations |
| Custom Token Header | custom_token_header_key | Standard OAuth 2.0 token endpoint parameters | Payer's token endpoint requires additional custom headers |
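The practical effect of these configurations is that a single request builder must branch per payer. A sketch of how a few of the table's fields might alter an otherwise standard EOB search (field names come from the table; the config dictionary shape and the custom_header_value key are our own illustration):

```python
def build_search_request(base_url: str, patient_id: str, config: dict) -> tuple:
    """Apply per-payer overrides from the custom-config table to a standard EOB search."""
    patient_param = "_patient" if config.get("underscored_patient_search") else "patient"
    count_param = config.get("count_search_param", "_count")  # standard is _count
    url = (f"{base_url}/ExplanationOfBenefit"
           f"?{patient_param}={patient_id}&{count_param}=50")
    headers = {"Accept": config.get("custom_accept_header", "application/fhir+json")}
    if config.get("custom_header_key"):
        # custom_header_value is a hypothetical companion field for illustration
        headers[config["custom_header_key"]] = config.get("custom_header_value", "")
    return url, headers

# A fully standard payer needs no overrides:
print(build_search_request("https://fhir.example.com/r4", "p1", {}))
# A payer with two custom config types:
url, headers = build_search_request(
    "https://fhir.example.com/r4", "p1",
    {"underscored_patient_search": True, "count_search_param": "pageSize"})
print(url)  # https://fhir.example.com/r4/ExplanationOfBenefit?_patient=p1&pageSize=50
```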

Analysis 4: Abandonment Reasons

One area we've worked to improve is our understanding of why patients abandon the connection process. The underlying reasons are varied: payer errors, eligibility issues, forgotten login credentials, or patients who don't have online accounts at all. This analysis tracks abandonment patterns over time to identify trends and common pain points. This data helps us understand where patients are getting stuck and illustrates the lengths to which Flexpa must go to understand user behavior without direct visibility.

Buckets:

  • Monthly trends in abandonment volume
  • Most common abandonment methods (e.g., clicked "Cancel", closed browser, encountered error screen)
  • Changes in abandonment patterns over time

Questions and Feedback

This methodology will evolve as we learn more about what matters for Patient Access API quality. If you have questions, spot issues, or want to suggest improvements, email interop@flexpa.com