Complete Oracle Infrastructure
Migration to OCI
Architected and implemented a comprehensive Oracle ecosystem migration from legacy on-premises infrastructure to OCI — including complete landing zone setup, hybrid networking with FastConnect, and advanced security frameworks across multiple connecting projects.
A large enterprise needed to migrate their entire Oracle ecosystem from legacy on-premises infrastructure to OCI while maintaining zero downtime, ensuring enhanced security, and preparing for future scalability across multiple connecting projects.
- Terraform IaC: Automated infrastructure provisioning with version-controlled deployments
- OCI Landing Zone: Complete foundational setup with multi-tenancy architecture
- Hybrid Networking: FastConnect integration with hub-spoke topology design
- Exadata Clusters: Virtual clustered database infrastructure for high performance
- Application Integration: E-Business Suite, Analytics platforms, and connecting systems
- Zero Trust Architecture: Identity-based perimeter with continuous verification
- OCI Security Zones: Policy-enforced compartments preventing misconfigurations
- Cloud Guard Activation: Continuous posture monitoring across all compartments
- Vault & KMS: Customer-managed encryption keys across all data stores
- Audit Service: Complete API and console activity logging with retention policy
- Wave-Based Migration: Non-critical systems first, production databases last with validated rollback procedures
- Parallel Run Period: On-premises and OCI environments run simultaneously during cutover validation
- Stakeholder Management: Technical complexity translated into business impact reporting for leadership
- Post-Migration Optimisation: Reserved instance purchasing and rightsizing delivering a 40% cost reduction post-stabilisation
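The wave-based migration approach above can be sketched as a small piece of planning logic (illustrative Python only; the workload names, the criticality scale, and the `plan_waves` / `migrate_wave` helpers are assumptions, not the delivered tooling):

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    criticality: int  # assumed scale: 1 = non-critical ... 3 = production database

def plan_waves(workloads):
    """Group workloads into migration waves: least critical systems first,
    production databases in the final wave."""
    waves = {}
    for w in sorted(workloads, key=lambda w: w.criticality):
        waves.setdefault(w.criticality, []).append(w)
    return [waves[k] for k in sorted(waves)]

def migrate_wave(wave, migrate, rollback_validated):
    """Gate an entire wave on validated rollback procedures before any
    workload in it is migrated."""
    blocked = [w.name for w in wave if not rollback_validated(w)]
    if blocked:
        raise RuntimeError(f"rollback not validated for: {blocked}")
    return [migrate(w) for w in wave]
```

The gate runs before the first migration in a wave starts, so a single unvalidated rollback procedure halts the whole wave rather than being discovered mid-cutover.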
Oracle EBS Government
Cloud Transformation
Led the cloud transformation of an Oracle E-Business Suite deployment for a Qatar government entity — migrating mission-critical ERP infrastructure to OCI with data sovereignty requirements, audit-grade security posture, and zero disruption to live government operations.
A Qatar government authority running Oracle EBS on ageing on-premises infrastructure faced mounting hardware risk, limited scalability, and upcoming compliance audit requirements. The ERP system was mission-critical — any disruption would impact government service delivery. Data sovereignty mandated that all data remain within Qatar's borders. The migration had to be executed without affecting live operations during active government service periods.
- OCI GCC Region: Deployment to a Qatar-compliant OCI region, satisfying the mandate that all data remain within Qatar's borders
- EBS Lift & Replatform: Oracle EBS database tier migrated to Exadata Cloud Service; application tier rightsized on OCI compute
- Hybrid Connectivity: FastConnect private link maintaining on-premises integration during transition period
- Database Cloning: Production clone used for migration testing and validation prior to cutover
- Network Segmentation: Dedicated VCN compartment with NSG-enforced access controls between EBS tiers
- Data Residency: All EBS data, backups, and logs confined to Qatar-compliant OCI region
- Audit Logging: Full OCI Audit Service and EBS audit trail enabled with tamper-evident log retention
- IAM Segregation: Role-based access for DBA, system admin, and application admin with MFA enforced
- Encrypted Backups: Automated OCI Object Storage backups with customer-managed encryption keys
- Stakeholder Management: Coordinated across IT, finance, and senior leadership with technical progress translated into executive reporting
- Change Management: Cutover windows agreed during low-activity government periods with rollback triggers defined
- Vendor Coordination: Managed Oracle Support, OCI account team, and internal network teams as a single point of technical authority
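The data residency control can be expressed as a simple guardrail check over resource inventory (illustrative Python; `residency_violations`, the resource fields, and the region name are placeholders, not real OCI identifiers):

```python
# Placeholder region identifier, not a real OCI region name.
ALLOWED_REGIONS = {"qatar-sovereign-1"}

def residency_violations(resources, allowed=ALLOWED_REGIONS):
    """Return the names of resources (databases, backups, log archives)
    whose region falls outside the approved sovereign region set."""
    return [r["name"] for r in resources if r["region"] not in allowed]
```

Run against the full inventory, including backup and log targets, this surfaces the easy-to-miss case: the primary database is compliant but a backup copy or log shipping target has drifted to a non-compliant region.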
Multi-Cloud FinOps &
Cost Reduction Programme
Designed and executed a multi-cloud cost optimisation programme across OCI, AWS, and Azure — identifying wasteful spend, rightsizing infrastructure, eliminating idle resources, and implementing governance frameworks to prevent cost drift from recurring.
A regional enterprise operating across OCI, AWS, and Azure had seen cloud spend grow 3x over 18 months with no corresponding business growth. No tagging policy meant cost allocation was impossible. Compute instances were over-provisioned at deployment and never reviewed. Development environments ran 24/7. No reserved instance strategy was in place. Finance had no visibility — billing appeared as a single monthly number with no breakdown.
- Resource Rightsizing: Instance size analysis across all three clouds — 40% of compute was over-provisioned by 2x or more
- Idle Resource Elimination: Automated discovery and decommissioning of unattached storage volumes, unused load balancers, and orphaned snapshots
- Reserved & Committed Use: 1-year reserved instance strategy across stable production workloads delivering 35–40% unit cost reduction
- Dev Environment Scheduling: Auto-stop/start for non-production environments — 65% reduction in dev environment costs
- Storage Tiering: Cold data moved to archival tiers; hot-tier misclassified data corrected
- Tagging Taxonomy: Mandatory cost allocation tags enforced via policy — environment, project, owner, department
- Budget Alerts: Cloud-native budget thresholds with escalation notifications at 80% and 100% of monthly allocation
- FinOps Dashboard: Cross-cloud cost visibility dashboard giving finance real-time spend by team and project
- Monthly Review Cadence: Structured cost review process established with engineering and finance stakeholders
- Database CPU Optimisation — The Hidden Billing Lever: Cloud elasticity creates a dangerous blind spot. After migration, auto-scaling absorbs performance degradation silently — teams stop tuning because the workload "feels fine." But unoptimised queries, missing indexes, and inefficient execution plans are now running on metered CPU billed by the hour, every hour. In a cloud environment, every poorly-written query is a direct and recurring line item on the monthly invoice. A structured database workload review — execution plan analysis, index optimisation, query refactoring — delivered significant CPU utilisation reduction across production database tiers. CPU is the primary cost driver on database compute. Optimising the workload, not just the instance, is where the material savings are found.
- Kubernetes Node Autoscaling — Right-Sizing at the Pod Level: Autoscaling is widely deployed. Cost-efficient autoscaling is rare. The default Kubernetes cluster autoscaler can only add nodes from pre-defined node groups, so the smallest node it can provision for an unschedulable pod is whatever the configured pool offers, which in most enterprise configurations is a significantly over-provisioned instance for a small workload. In a production microservices environment running 70 services with peaks reaching 350 nodes, the default autoscaler was provisioning large general-purpose instances to accommodate pods requesting a fraction of those resources. Karpenter replaced the default autoscaler, provisioning exactly the instance type and size matching the actual resource request of each pending pod. The result was a step-change reduction in node count at equivalent or better application performance. The principle: autoscaling must be understood at the pod-resource level, not just the "scale out / scale in" level, for it to be a cost control rather than a cost amplifier.
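The dev environment scheduling rule reduces to a small decision function (a minimal Python sketch; the 07:00–19:00 weekday window and the `desired_state` helper are assumptions for illustration):

```python
from datetime import datetime

WORK_HOURS = range(7, 19)   # assumed working window, 07:00-19:00 local
WORK_DAYS = range(0, 5)     # Monday-Friday

def desired_state(env_tag: str, now: datetime) -> str:
    """Production is always on; non-production runs only in working hours."""
    if env_tag == "production":
        return "RUNNING"
    in_window = now.hour in WORK_HOURS and now.weekday() in WORK_DAYS
    return "RUNNING" if in_window else "STOPPED"
```

Running non-production for roughly 60 of 168 weekly hours removes about 64% of its compute hours, which is consistent with the ~65% dev environment cost reduction.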
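The billing effect of a missing index can be demonstrated end to end with a local database (illustrative Python using SQLite as a stand-in for the production engine; the table and index names are hypothetical):

```python
import sqlite3

# SQLite stands in here for the production database engine.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 500, float(i)) for i in range(5000)],
)

def plan(sql: str) -> str:
    """Return the execution plan the optimiser chooses for a query."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(total) FROM orders WHERE customer_id = 42"
before = plan(query)   # without an index: a full scan of every row
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)    # with the index: only matching rows are read
```

The optimiser's plan shifts from a full scan of every row to an index search over only the matching rows. On metered cloud compute, that difference in rows read is paid for every hour the query keeps running.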
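The pod-level right-sizing principle behind the Karpenter change can be sketched as a selection rule (illustrative Python; the instance catalogue, names, and prices are hypothetical, and real Karpenter additionally bin-packs many pods per node):

```python
# Hypothetical instance catalogue: (name, vCPU, memory GiB, $/hour).
# Figures are illustrative, not real cloud list prices.
CATALOGUE = [
    ("large-general", 16, 64, 1.04),
    ("medium-general", 8, 32, 0.52),
    ("small-general", 2, 8, 0.13),
]

def fixed_pool_node(pod_cpu: float, pod_mem: float):
    """Default pool behaviour: one pre-configured node type, whatever the request."""
    return CATALOGUE[0]

def best_fit_node(pod_cpu: float, pod_mem: float):
    """Karpenter-style provisioning: the cheapest instance that satisfies
    the pending pod's actual resource request."""
    fits = [i for i in CATALOGUE if i[1] >= pod_cpu and i[2] >= pod_mem]
    return min(fits, key=lambda i: i[3])
```

For a 1 vCPU / 4 GiB pod, the fit-based choice here is roughly 8x cheaper per hour than the fixed large node, which is the mechanism behind the step-change node count reduction.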
Legacy Modernisation to
Cloud-Native Architecture
Led the architectural transformation of a UK enterprise from a monolithic on-premises application to a cloud-native microservices architecture on AWS/Azure — including containerisation, Kubernetes orchestration, and fully automated CI/CD pipelines.
A UK-based enterprise was operating a decade-old monolithic application stack that could not scale to meet business growth, had release cycles measured in months, and had accumulated significant technical debt. The objective was a complete modernisation to cloud-native architecture — containerised microservices, automated deployment pipelines, and infrastructure-as-code across a hybrid cloud environment.
- Microservices Design: Monolith decomposed into independently deployable containerised services
- EKS Setup Strategy: Elastic Kubernetes Service configuration with auto-scaling and pod disruption budgets
- CI/CD Pipeline Design: Automated build, test, and deployment pipeline architecture with rollback capability
- Terraform IaC: Infrastructure automation across development, staging, and production environments
- Security Framework: Comprehensive IAM, network policies, and secrets management across container workloads
- Architecture Authority: Led all technical architecture decisions and platform selections
- Cross-Team Coordination: Managed DevOps specialists, developers, and infrastructure teams
- Risk Management: Identified and mitigated transformation risks across the full delivery lifecycle
- Executive Reporting: Translated technical complexity into business value for UK senior leadership
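The rollback capability in the pipeline design can be sketched as its final stage (a minimal Python sketch; `deploy_with_rollback` and its callback signatures are assumptions, not the delivered pipeline):

```python
def deploy_with_rollback(release: str, deploy, health_check, rollback) -> str:
    """Deploy a release, verify it, and roll back automatically if
    post-deploy health checks fail."""
    previous = deploy(release)        # returns a handle to the prior version
    if health_check(release):
        return "deployed"
    rollback(previous)
    return "rolled-back"
```

The key design choice is that `deploy` returns the previous version before anything else happens, so a rollback target always exists by the time a health check can fail.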
9-Layer Defence Architecture:
GCC Government Cloud Security
Designed and implemented a comprehensive defence-in-depth security architecture across a GCC government OCI environment: nine security layers, each independently effective yet mutually reinforcing, from perimeter WAF through to continuous posture management, replacing a fragmented point-solution approach that had left critical gaps between controls.
A GCC government entity operating on OCI had deployed security controls in silos — each team owned one layer (network, IAM, database, monitoring) with no unified architecture connecting them. Each layer assumed the layers around it were correctly configured. They were not. A routine review uncovered open ingress rules on private subnets, over-provisioned service accounts with no MFA, database credentials hardcoded in application repositories, and no continuous posture monitoring. No breach had occurred — but the conditions for one existed at multiple points simultaneously.
- Layer 1 — WAF: OCI WAF deployed in front of all public-facing workloads. SQL injection, XSS, bot mitigation, and DDoS protection at the application layer — evaluated before traffic reaches any infrastructure component.
- Layer 2 — Perimeter Firewall: OCI Network Firewall (Palo Alto-powered) in hub VCN for deep packet inspection, IPS, URL categorisation, and SSL inspection. All spoke-to-spoke and spoke-to-internet traffic forced through the hub.
- Layer 3 — NSGs, Route Tables & NACLs: Security Lists and NSGs locked to minimum required ports. Every route table audited — a single incorrect default route was creating a path to the internet gateway that security groups alone could not prevent.
- Layer 4 — Private Subnets: All databases, internal APIs, and backend services redesigned into private subnets — no public IP, no internet gateway route. Architectural isolation, not a firewall rule.
- Layer 5 — Vaults & Secrets Management: OCI Vault deployed. All credentials migrated out of application code and CI/CD pipeline variables. Code holds references only — values resolved at runtime from Vault. Hardcoded credentials eliminated across all environments.
- Layer 6 — IAM & Zero Trust Access: Full IAM audit conducted. Least-privilege RBAC enforced across all compartments. MFA mandated for all human users. Instance Principals for service-to-service auth — no shared credentials. Unused accounts and over-provisioned policies removed systematically.
- Layer 7 — Database Security & Encryption: OCI Data Safe deployed for database activity monitoring and SQL firewall. AES-256 encryption at rest with customer-managed keys and rotation policy. Database access restricted to defined application server CIDRs. Designed to function even if every layer above it had failed.
- Layer 8 — Bastion Services: OCI Bastion configured as the sole operational access path to all compute and databases. Session logging enabled. No permanent open SSH or RDP ports. Time-limited sessions only.
- Layer 9 — Cloud Guard & Continuous Posture: Cloud Guard deployed across all compartments with custom detector recipes calibrated to the environment's specific risk profile. Continuous monitoring for misconfiguration drift — findings routed to ticketing system for tracked remediation within defined SLAs.
- Independent Failure Design: Each layer designed to function as a meaningful control even if adjacent layers are misconfigured or bypassed
- Gap Visibility: The spaces between layers are auditable — not just the layers themselves
- Drift Prevention: Terraform IaC with policy-as-code ensures new deployments cannot deviate from the security baseline
- Hub-Spoke Blast Radius: A compromised spoke cannot reach another spoke without traversing hub firewall inspection
- Current-State Audit: Full review of network topology, IAM policies, security group rules, and credential practices before remediation
- Risk-Prioritised Remediation: Critical findings addressed within 72 hours — open ingress, hardcoded credentials, missing MFA
- IaC Codification: All security configurations in Terraform — no manual console changes permitted post-implementation
- Compliance Mapping: Controls mapped to NIST CSF, CIS Controls, and Qatar data sovereignty requirements
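The Layer 3 route-table audit reduces to a simple check (illustrative Python; the dictionary field names are assumptions standing in for exported route-table data):

```python
def risky_default_routes(route_tables):
    """Flag 0.0.0.0/0 routes that point at an internet gateway from
    subnets classified as private."""
    findings = []
    for table in route_tables:
        for rule in table["rules"]:
            if (table["subnet_tier"] == "private"
                    and rule["destination"] == "0.0.0.0/0"
                    and rule["target_type"] == "internet_gateway"):
                findings.append(table["name"])
    return findings
```

This is the class of finding that security groups alone cannot prevent: the NSG may be locked down, but a default route still gives traffic a path to the internet gateway.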
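The Layer 5 reference-only pattern can be sketched as follows (illustrative Python; `resolve_secrets`, the `vault:` prefix convention, and the `fetch_secret` callback are assumptions standing in for the OCI Vault SDK):

```python
def resolve_secrets(config: dict, fetch_secret) -> dict:
    """Replace 'vault:<secret-id>' references with values fetched at
    runtime; code and pipeline variables hold only the reference."""
    return {
        key: fetch_secret(value.removeprefix("vault:"))
        if isinstance(value, str) and value.startswith("vault:")
        else value
        for key, value in config.items()
    }
```

Because only references are stored, a leaked repository or pipeline log exposes secret identifiers, not credentials, and rotation happens in the vault without any code change.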