Learning centre
A reference for governed AI-native software delivery.
Plain-English definitions of the concepts used across the GrowAppAI platform — what governed delivery means, how traceability and policy gates work, the deployment models we support, and how we think about risk in AI-native software delivery. Use this page as a shared vocabulary for product, engineering, security, and compliance conversations.
Glossary
Each entry has a stable anchor — copy the link icon next to a term to share it directly.
- Governed AI-native software delivery#
- An operating model where AI generates and accelerates parts of the software lifecycle (intent capture, architecture, code, tests, documentation, release artifacts) inside an explicit governance frame — policies, approvals, traceability, and release evidence — rather than around it.
- Control plane#
- The layer that decides what runs, who approves it, and what evidence is recorded. In GrowAppAI, the control plane orchestrates the full lifecycle (intent → architecture → code → CI gates → artifacts → release evidence) so each stage is observable, governable, and auditable from one place.
- Intent-to-evidence traceability#
- An end-to-end link between a business or product requirement and the deployment evidence that proves the requirement was implemented and released. Every artifact in between — architecture decision, code change, pull request, build, policy check, approval — is referenceable from a single chain.
- Approval boundary#
- A point in the lifecycle where progress requires explicit, recorded approval from a defined role (security, architecture, compliance, engineering leadership). Approval boundaries are configurable per workflow and produce signed evidence that becomes part of the release record.
- Policy gate#
- A machine-evaluated check inside the lifecycle (e.g. SAST, license scan, schema validation, dependency policy, model usage policy, prompt-handling policy) whose result determines whether a workflow can advance. Failed gates block progression and create remediation tasks rather than silently passing.
- Release evidence#
- The structured, queryable record of what changed in a release, why it changed, who approved it, which gates ran and passed, and which artifacts were produced. Release evidence is generated by the workflow itself, not assembled manually after the fact.
- Stage (15-stage pipeline)#
- A discrete step in the GrowAppAI lifecycle, from initial intent capture to deployment evidence. Each stage has explicit inputs, outputs, owners, and policy constraints, so risk is reduced incrementally and surface area for late-stage rework is minimized.
- Risk attenuation#
- The progressive reduction of expected economic loss across the lifecycle. As work moves from intent → architecture → code → build → release, governed gates remove a measurable fraction of risk at each stage; the portion that survives all gates is the residual risk.
- Residual risk#
- The portion of delivery risk that cannot be fully eliminated by upstream controls — typically operational, external, or environmental. GrowAppAI quantifies it explicitly so leadership can decide where to deploy compensating controls instead of pretending the risk is zero.
- SaaS deployment#
- GrowAppAI hosted, operated, and updated by us. Fastest onboarding, lowest operational overhead, suitable for organizations whose data residency and security posture allow an externally hosted control plane.
- Hybrid deployment#
- GrowAppAI's control plane is operated centrally while sensitive workloads (model inference, source code handling, build) execute inside customer-controlled boundaries. Useful when policy or contractual constraints require local execution but central observability is acceptable.
- On-prem deployment#
- GrowAppAI deployed entirely within the customer's network, with no required external dependencies for daily operation. Designed for security-sensitive, regulated, or air-gapped environments where source code, model inference, or telemetry cannot leave the organization.
- SBOM (Software Bill of Materials)#
- A machine-readable inventory of components in a software artifact (libraries, versions, licenses, hashes). GrowAppAI generates and stores SBOMs as part of release evidence so supply-chain provenance is verifiable per release rather than reconstructed during incidents.
- Model evaluation#
- Systematic measurement of an AI model's fitness for a given task — accuracy, prompt sensitivity, cost, latency, and policy adherence. GrowAppAI routes work to evaluated models per task type instead of treating all AI usage as a single, uniform capability.
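The risk-attenuation idea above reduces to simple compounding arithmetic. The sketch below is illustrative only — the stage names and removal fractions are invented, not GrowAppAI's actual quantification model.

```python
# Illustrative sketch of risk attenuation across governed stages.
# Stage names and removal fractions are hypothetical, not platform values.

def residual_risk(initial_risk: float, removal_fractions: list[float]) -> float:
    """Apply each stage's risk-removal fraction multiplicatively."""
    risk = initial_risk
    for fraction in removal_fractions:
        risk *= (1.0 - fraction)
    return risk

stages = {
    "intent review": 0.30,
    "architecture gate": 0.25,
    "code + policy gates": 0.40,
    "build + release checks": 0.20,
}

remaining = residual_risk(1.0, list(stages.values()))
print(f"residual risk: {remaining:.3f} of initial")  # 0.70 * 0.75 * 0.60 * 0.80 = 0.252
```

The point of the model is the last number: whatever survives every gate is the residual risk, which leadership addresses with compensating controls rather than treating as zero.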
Frequently asked questions
Shorter answers for buyers, security reviewers, and engineering leaders evaluating GrowAppAI. Long-form material lives in the whitepaper and on the platform pages.
What is GrowAppAI?
GrowAppAI is the control plane for governed AI-native software delivery. It connects business intent, architecture, code, CI controls, artifacts, and release evidence into one governed lifecycle so enterprises can adopt AI in software delivery without losing traceability, approvals, or release confidence.
How is GrowAppAI different from a coding assistant?
Coding assistants accelerate individual developer tasks. GrowAppAI is the governance layer above AI-driven delivery — it decides which work runs, who approves it, what policy gates apply, and what release evidence is produced. It complements AI coding assistants rather than replacing them.
What deployment models are supported?
SaaS, hybrid, and on-prem. The choice typically depends on data residency, regulatory exposure, and whether source code or AI inference can leave the organization. Hybrid and on-prem deployments are first-class — not retrofitted.
Where is my source code stored?
In SaaS, code is processed inside GrowAppAI's hosted infrastructure under contractual data-handling commitments. In hybrid and on-prem deployments, source code stays inside customer-controlled boundaries; only metadata and policy decisions are exchanged with the central control plane (and even that is configurable in on-prem).
Which AI models do you use?
GrowAppAI is multi-model by design. We route tasks to evaluated models — proprietary or self-hosted — based on task type, cost, latency, and policy constraints. Customers can constrain which providers and which model classes are allowed per workflow, and self-host models entirely in regulated deployments.
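A minimal sketch of what per-task routing over an evaluated model catalog can look like. The model names, task types, and policy flags here are invented for illustration; they are not GrowAppAI's catalog or API.

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str
    task_types: set[str]   # tasks this model has been evaluated for
    cost_per_call: float   # relative cost score from evaluation
    self_hosted: bool      # required in regulated deployments

# Hypothetical evaluated-model catalog; names are illustrative only.
CATALOG = [
    ModelProfile("code-gen-large", {"codegen", "tests"}, cost_per_call=1.0, self_hosted=False),
    ModelProfile("doc-writer", {"docs"}, cost_per_call=0.3, self_hosted=False),
    ModelProfile("local-code-gen", {"codegen"}, cost_per_call=0.6, self_hosted=True),
]

def route(task_type: str, require_self_hosted: bool = False) -> ModelProfile:
    """Pick the cheapest evaluated model that policy allows for this task."""
    allowed = [
        m for m in CATALOG
        if task_type in m.task_types and (m.self_hosted or not require_self_hosted)
    ]
    if not allowed:
        raise ValueError(f"no evaluated model permitted for task {task_type!r}")
    return min(allowed, key=lambda m: m.cost_per_call)

print(route("codegen", require_self_hosted=True).name)  # local-code-gen
```

The design point is the constraint check before the cost comparison: policy narrows the candidate set first, then evaluation data picks among what remains.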
How do you handle approvals and audit trails?
Approvals and audit trails are part of the workflow itself, not a separate documentation step. Each stage records inputs, outputs, gate results, approvals, and the identity that approved them, producing a queryable release record per change. There is no manual reconstruction.
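To make "queryable release record" concrete, a record and a query against it might look like the sketch below. The field names and values are assumptions for illustration, not the platform's actual schema.

```python
# Hypothetical release-evidence record; keys and values are illustrative only.
release_record = {
    "release": "2024.06.1",
    "intent": "REQ-1042: customer export must respect data-retention policy",
    "changes": [{"pr": 318, "commit": "a1b2c3d"}],
    "gates": [
        {"name": "sast", "result": "pass"},
        {"name": "license-scan", "result": "pass"},
        {"name": "schema-validation", "result": "pass"},
    ],
    "approvals": [{"role": "security", "identity": "s.lee", "signed": True}],
}

def failed_gates(record: dict) -> list[str]:
    """Query the record directly instead of reconstructing evidence later."""
    return [g["name"] for g in record["gates"] if g["result"] != "pass"]

def approvers(record: dict) -> list[str]:
    """Return the recorded identities behind each signed approval."""
    return [a["identity"] for a in record["approvals"] if a["signed"]]

print(failed_gates(release_record), approvers(release_record))
```

Because the workflow emits the record as it runs, an auditor's question ("which gates failed, who approved") becomes a lookup rather than a reconstruction exercise.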
Do I need to replace my CI/CD?
No. GrowAppAI integrates with your existing CI/CD and source control. It adds governed stages, policy gates, traceability, and release evidence to what your pipelines already do; it does not replace your test runners, build infrastructure, or deployment targets.
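One way to picture a policy gate layered over an existing pipeline step: the gate consumes output your current tooling already produces and turns it into a pass/fail decision with remediation tasks. The tool name, severity scale, and findings below are hypothetical.

```python
def evaluate_gate(gate_name: str, findings: list[dict], max_severity: str = "medium") -> dict:
    """Block progression and emit remediation tasks when findings exceed policy."""
    order = {"low": 0, "medium": 1, "high": 2, "critical": 3}
    violations = [f for f in findings if order[f["severity"]] > order[max_severity]]
    return {
        "gate": gate_name,
        "result": "fail" if violations else "pass",
        # Failed gates create remediation tasks rather than silently passing.
        "remediation_tasks": [f"fix {v['id']} ({v['severity']})" for v in violations],
    }

# Example input: findings from an existing SAST step (invented for illustration).
sast_findings = [
    {"id": "CWE-79", "severity": "high"},
    {"id": "CWE-209", "severity": "low"},
]
outcome = evaluate_gate("sast", sast_findings)
print(outcome["result"], outcome["remediation_tasks"])  # fail ['fix CWE-79 (high)']
```

Note that the gate does not run the scan itself — your CI still does that; the gate only evaluates the result against policy and records the decision.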
Can I use GrowAppAI in regulated environments?
Yes — that's a core design constraint, not an afterthought. On-prem and hybrid deployments are supported for organizations operating under sectoral regulations (financial, healthcare, defence, public sector). Policy gates and approval boundaries are designed to map onto existing control catalogs.
How does pricing work?
Pricing is enterprise-tiered and depends on deployment model, organization size, and required governance scope. Detailed pricing is shared during evaluation conversations rather than published; book a demo and we'll walk you through the model that fits your environment.
How can I see a demo?
Use the Book a Demo button on the homepage or in the header. Demos are tailored to your delivery model (SaaS, hybrid, on-prem) and the parts of the lifecycle you most want to govern. The whitepaper is a useful pre-demo read.
Question not answered here?
Book a demo and we'll walk through your specific deployment model, governance constraints, and lifecycle scope.