Head of Engineering

Who we are

Globacap is a young and innovative FinTech company headquartered in London. We help businesses reach their goals faster by simplifying and driving efficiency in private capital markets through two products: Fundraising Management and Cap Table Management.

We also love technology and believe that emerging technologies will help us better shape the future, so we harness the power of blockchain to make our platform best-in-class.

Do you have both imagination and knowledge, like taking ownership and seeing the rewards of your hard work, and want to be part of a rapidly growing company? We’d love to have you join the team.

The Role

We are on the hunt for an outstanding, entrepreneurial, execution-focused Head of Engineering to lead the continued build-out of our platform products. This is a hands-on role, leading a team of internal and external developers.

What we require:

  • Be a highly proficient full-stack programmer, ideally with experience of both dynamically and statically typed languages
  • Have an understanding of, and experience in, developing cloud-native architectures, infrastructure and distributed systems, and operating those systems in production
  • Have experience building multi-layered API and microservices architectures
  • Be proficient with the most common web and software development patterns and frameworks
  • Have experience identifying, evaluating, and removing blockers for engineering teams
  • Be proficient in Ruby (and Rails)

The perfect candidate will also:

  • Be proficient in JavaScript (Node.js) and React
  • Understand blockchain networks and be familiar with Hyperledger Fabric
  • Be experienced in working with AWS and IBM Cloud
  • Be experienced with, and understand, Agile software methodology

The perfect candidate can anticipate technical obstacles and ensure our platform design is “future-proof”. You will have proven management experience and repeated successes with the solutions you have co-ordinated and deployed. You will have a deep technical understanding, whilst also being able to fully understand the bigger picture. You will be extremely organised, have superior problem-solving skills and be able to deliver results quickly. You are not afraid to experiment and fail, and you learn fast and adapt quickly.

What you’ll do

  • Drive and co-ordinate the development of our platform on a day-to-day basis, delivering real results, anticipating obstacles, and designing solutions to best address future needs
  • Co-ordinate a team of both internal and external (remotely located) developers
  • Contribute to the success of Globacap by evangelising our product, our team and our brand in various technical arenas
  • Work collaboratively across teams, including marketing, product development and customer services
  • Review the engineering structure and define a long term fit for purpose strategy
  • Assess technology options to deliver for the future
  • Inherit, manage and recruit new technical resources
  • Support teams in building out a cutting-edge platform allowing for high scalability, reliability and performance and enabling collaboration and reuse across the organisation

What you’ll bring

In order to be considered for this role, you must have prior experience as a Head of Engineering, and demonstrate:

  • A strategic, methodical approach
  • An ability to make technology simple and explain it to non-technical staff, board members and outside parties
  • Solid communication and presentation skills
  • An ability to prioritise tasks and use your own initiative
  • An efficient, streamlined approach to your work
  • The utmost integrity and an execution-driven mindset
  • Lateral thinking, combined with your own unique perspective
  • Excellent English language skills
  • A positive, hands-on, “yes, we can” approach

Technologies we use

  • AWS & IBM Cloud
  • Linux
  • Ruby + React web platform
  • Hyperledger blockchain

Why work for us

You will enjoy a collaborative and supportive working environment, with a flat structure, flexible working, unlimited holidays, a personal training budget and share options.

Remuneration

  • Competitive salary
  • Share options
  • Company pension


Trend Micro Predicts Escalating Cloud and Supply Chain Risk


Cyber risk increases at all layers of the corporate network as we enter a new decade

DALLAS–(BUSINESS WIRE)–Trend Micro Incorporated (TYO: 4704; TSE: 4704), a global leader in cybersecurity solutions, today announced its 2020 predictions report, which states that organizations will face a growing risk from their cloud and the supply chain. The growing popularity of cloud and DevOps environments will continue to drive business agility while exposing organizations, from enterprises to manufacturers, to third-party risk.


Related:

Migrate from Amazon Redshift to Oracle Autonomous Data Warehouse in 7 easy steps.

In this blog, I plan to give you a quick overview of how you can use the SQL Developer Amazon Redshift Migration Assistant to migrate your existing Amazon Redshift data warehouse to Oracle Autonomous Data Warehouse (ADW).

But first, why the need to migrate to Autonomous Data Warehouse?

Data-driven organizations differentiate themselves through analytics, extracting value from all their data sources to further their competitive advantage. Today’s digital world creates data at such an explosive rate that the physical data warehouses that were once great for collecting data from across the enterprise for analysis can no longer keep pace with the storage and compute resources needed to support them. In addition, the manual, cumbersome tasks of patching, upgrading and securing these environments and their data pose significant risks to businesses.

A few cloud vendors serve this niche market; one of them is Amazon Redshift, a fully managed data warehouse cloud service built on top of technology licensed from ParAccel. Though it is an early entrant, its query processing architecture severely limits concurrency, making it unsuitable for large data warehouses or web-scale data analytics. Redshift is only available in fixed blocks of hardware configurations, so compute cannot be scaled independently of storage. This leads to excess capacity, making customers pay for more than they use. Additionally, resizing puts the cluster in a read-only state and may require downtime, which can take hours while data is redistributed.

Oracle Autonomous Data Warehouse is a fully managed database tuned and optimized for data warehouse workloads that supports both structured and unstructured data. It automatically and continuously patches, tunes, backs up, and upgrades itself with virtually no downtime. Integrated machine-learning algorithms drive automatic caching, adaptive indexing, advanced compression, and optimized cloud data loading, delivering unrivaled performance and allowing you to quickly extract data insights and make critical decisions in real time. With little human intervention, the product virtually eliminates human error, with dramatic implications not only for minimizing security breaches and outages but also for cost. Autonomous Data Warehouse is built on the latest Oracle Database software and technology that runs your existing on-premises marts, data warehouses, and applications, making it compatible with all your existing data warehouse, data integration, and BI tools.

Strategize your Data Warehouse Migration

Here is a proposed workflow for either on-demand migration of Amazon Redshift or the generation of scripts for a scheduled manual migration that can be run at a later time.

1. Establish Connections: Connect to both Amazon Redshift (Source) and Oracle Autonomous Data Warehouse (Target) using the SQL Developer Migration Assistant.

Download SQL Developer 18.3 or a later version. It is a client application that can be installed on a workstation or laptop running Windows or Mac OS X. For the purposes of this blog, we will run it on Microsoft Windows. Also download the Amazon Redshift JDBC driver, which is needed to access the Amazon Redshift environment.

Open the SQL Developer application and add the Redshift JDBC driver as a third-party driver (Tools > Preferences > Database > Third Party JDBC Drivers).

Add a connection to the Amazon Redshift database: in the Connections panel, create a new connection, select the Amazon Redshift tab and enter the connection information for Amazon Redshift.

Tip:

  • If you are planning to migrate multiple schemas, it is recommended to connect to your Amazon Redshift instance with the master username.
  • If you deployed your Amazon Redshift environment within a Virtual Private Cloud (VPC), you have to ensure that your cluster is accessible from the Internet; here are the details on how to enable public Internet access.
  • If your Amazon Redshift client connection to the database appears to hang or time out when running long queries, here are the details with possible solutions to address this issue.

Add a connection to Oracle Autonomous Data Warehouse: in the Connections panel, create a new connection, select the Oracle tab and enter the connection information along with the wallet details. If you haven’t provisioned Autonomous Data Warehouse yet, please do so now; here are quick, easy steps to get you started. You can even start with a free trial account.

Test the connections for both Redshift and Autonomous Data Warehouse before you save them.

2. Capture / Map Schema: From the Tools menu of SQL Developer, start the Cloud Migration Wizard to capture metadata schemas and tables from the source database (Amazon Redshift).

First, connect to AWS Redshift from the connection profile and identify the schemas that need to be migrated. All objects, mainly tables, in each selected schema will be migrated. You have the option to migrate data as well. Migration to Autonomous Data Warehouse is done on a per-schema basis, and schemas cannot be renamed as part of the migration.
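If it helps to take stock before starting the wizard, Redshift exposes the standard PostgreSQL catalog views, so a quick query like the one below lists the user schemas that contain tables. This is just an optional aid rather than part of the migration workflow, and the exclusion list is only an example.

    -- List user schemas that contain tables, excluding the built-in catalogs
    SELECT DISTINCT schemaname
    FROM pg_tables
    WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
    ORDER BY schemaname;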

Note: When you migrate data, you have to provide the AWS Access Key, AWS Secret Access Key, and an existing S3 bucket URI where the Redshift data will be uploaded and staged. The security credentials require privileges to store data in S3. If possible, create new, separate access keys for the migration. The same access keys will be used later to load the data into the Autonomous Data Warehouse using secure REST requests.

For example, if you provide URI as https://s3-us-west-2.amazonaws.com/my_bucket

the Migration Assistant will create folders oracle_schema_name/oracle_table_name inside the bucket my_bucket, so the staged files will be located at:

https://s3-us-west-2.amazonaws.com/my_bucket/oracle_schema_name/oracle_table_name/*.gz

Redshift datatypes are mapped to Oracle datatypes, and Redshift object names are converted to Oracle names based on Oracle naming conventions. Column defaults that use Redshift functions are replaced with their Oracle equivalents.
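As a simplified illustration of the kind of conversion involved (the assistant’s actual datatype, naming and default mappings may differ), a Redshift table such as the first definition below could come out looking roughly like the second in Autonomous Data Warehouse:

    -- Redshift source table (illustrative)
    CREATE TABLE sales (
      sale_id    INTEGER      NOT NULL,
      customer   VARCHAR(256),
      created_at TIMESTAMP    DEFAULT GETDATE()
    );

    -- A possible Oracle equivalent after conversion
    CREATE TABLE SALES (
      SALE_ID    NUMBER(10)   NOT NULL,
      CUSTOMER   VARCHAR2(256),
      CREATED_AT TIMESTAMP    DEFAULT SYSTIMESTAMP
    );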

3. Generate Schema: Connect to Autonomous Data Warehouse from the connection profile. Ensure the user has administrative privileges, as this connection is used throughout the migration to create schemas and objects. Provide a password for the migration repository that will be created in the Autonomous Data Warehouse; you can choose to remove this repository after the migration. Specify a directory on the local system to store the generated scripts necessary for the migration. To start the migration right away, choose ‘Migrate Now’.

Use ‘Advanced Settings’ to control formatting options, the number of parallel threads to use when loading data, and the reject limit (the number of rows to reject before erroring out) during the migration.

Review the summary and click ‘Finish’. If you have chosen an immediate migration, the wizard stays open until the migration is finished. Otherwise, the migration process generates the necessary scripts in the specified local directory and does not run them.

If you choose to just generate migration scripts in the local directory, then continue with the next steps.

  4. Stage Data: Connect to the Amazon Redshift environment and run redshift_s3unload.sql to unload data from the Redshift tables and store it in Amazon S3 (staging), using the access credentials and the S3 bucket that were specified in the migration wizard workflow.
  5. Deploy Target Schema: Connect to Autonomous Data Warehouse as a privileged user (for example, ADMIN) and run adwc_ddl.sql to deploy the generated schemas and DDL converted from Amazon Redshift.
  6. Copy Data: While connected to Autonomous Data Warehouse, run adwc_dataload.sql, which contains all the load commands necessary to load data straight from S3 into your Autonomous Data Warehouse (a rough sketch of these staging and loading commands follows after this list).
  7. Review Migration Results: The migration task creates three files in the local directory: MigrationResults.log, readme.txt and redshift_migration_reportxxx.txt. Each of them contains information on the status of the migration.
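The generated scripts are specific to your schemas and tables, but conceptually they revolve around two commands: an UNLOAD per table on the Redshift side to stage compressed files in S3, and a DBMS_CLOUD.COPY_DATA call per table on the Autonomous Data Warehouse side to load those files. The sketch below is illustrative only; the table, bucket and credential names and the format options are placeholders, and the actual generated statements will differ.

    -- Roughly what redshift_s3unload.sql does for each table (placeholder values):
    UNLOAD ('SELECT * FROM sales')
    TO 's3://my_bucket/oracle_schema_name/SALES/'
    CREDENTIALS 'aws_access_key_id=...;aws_secret_access_key=...'
    DELIMITER ',' GZIP ALLOWOVERWRITE;

    -- Roughly what adwc_dataload.sql does for each table (placeholder values):
    BEGIN
      DBMS_CLOUD.COPY_DATA(
        table_name      => 'SALES',
        credential_name => 'REDSHIFT_MIGRATION_CRED',
        file_uri_list   => 'https://s3-us-west-2.amazonaws.com/my_bucket/oracle_schema_name/SALES/*.gz',
        format          => json_object('delimiter' value ',', 'compression' value 'gzip')
      );
    END;
    /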

Test a few queries to make sure all your data from Amazon Redshift has been migrated. Oracle Autonomous Data Warehouse supports connections from various client applications; connect and test them.
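A simple sanity check is to compare row counts for each migrated table on both sides; the table name below is a placeholder.

    -- Run on both Amazon Redshift and Autonomous Data Warehouse and compare the results
    SELECT COUNT(*) AS row_count FROM sales;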

Conclusion

With greater flexibility, lower infrastructure cost, and lower operations overhead, there’s a lot to love about Oracle Autonomous Data Warehouse. The unique value of Oracle comes from its complete cloud portfolio with intelligence infused at every layer, spanning infrastructure services, platform services, and applications. For Oracle, the autonomous enterprise goes beyond mere automation, in which machines respond to an action with an automated reaction; instead, it is based on applied machine learning, making the platform completely autonomous, eliminating human error and delivering unprecedented performance, high security and reliability in the cloud.

Related:

Infrastructure as Code (IaC): The Next Generation of IT Automation



In a recent analysis of Dell EMC and VMware IT Transformation Workshops, CIOs continued to prioritize initiatives that help accelerate the delivery of software and applications for the business. Based on anonymous customer data, the top emerging priorities for CIOs were the desire to achieve continuous deployment (89%) and DevOps (87%).

For many of our Dell EMC customers, efforts to accelerate software delivery velocity and drive cloud application transformation have initially and understandably focused on developers. ‘Top down’ DevOps initiatives have focused on creating continuous delivery (CD) pipelines that eliminate the manual processes, hand-offs and bottlenecks associated with the software delivery lifecycle (SDLC) and underlying value stream. Particular focus has been placed on streamlining and automating source code and build management, integration and testing, as well as overall workflow.

As the DevOps name indicates, infrastructure and IT operations are also a critical, integral part of the story. Automating provisioning of development, test and production environments and related infrastructure is likewise critical to increasing overall software release velocity. Just as with application source code, infrastructure configurations can be treated as pure code. Treating configurations as code provides the same benefits as it does for applications, including version control, automated testing, and continuous monitoring. Treating configuration as code and handling changes through CD pipelines helps prevent ‘snowflake’ infrastructure deployments that cannot be reproduced, and ensures that configuration errors never make it to production.

But while the value of automating provisioning with Infrastructure as Code (IaC) and pipelines is clear, many organizations to date have relied primarily on standalone automation tools and one-off scripting. While this approach is certainly an improvement over manual workflows and processes, IaC provides far more than traditional automation practices: it automates full-stack deployment of infrastructure and apps; it offers source-controlled infrastructure and packages; it introduces software development practices applied to infrastructure build and operate procedures; and it allows infrastructure to self-monitor system configurations and self-heal to a known-good state or version.

Cloud Native IT Operating Model

To provide the AWS-like experience that developers often seek, IT organizations are finding that IaC is required for private cloud and internal CaaS, PaaS and IaaS services.  Organizations are either launching IaC initiatives that extend and leverage DevOps efforts, or in some cases are even launching pure ‘bottom-up’ IaC initiatives focused on leveraging CD pipelines to define and manage the creation, configuration, and update of infrastructure resources and services. IaC is critical to enabling IT to operate like a public cloud provider, and provide the speed, flexibility and resiliency needed to support Digital Transformation.

One of our recent Dell EMC Consulting customers in the technology services sector wanted to provide their developers a common experience across their multi-cloud environment and deliver “public-cloud responsiveness” using an on-premises converged infrastructure solution. The key desired outcomes from their DevOps / IaC initiative were to minimize inconsistency when building infrastructure components while improving the efficiency of deploying both cloud and on-premises infrastructure. As with most DevOps / IaC transformation programs, driving culture and behavior change was a key priority. The customer was seeking to cultivate internal knowledge and practical experience with Infrastructure-as-Code and DevOps concepts and tools, and to transform disparate client teams into one that follows Infrastructure-as-Code and DevOps behaviors.

Our Dell EMC Consulting team worked with the customer to use Infrastructure-as-Code and DevOps methodologies to architect and automate the deployment of a high-performance converged infrastructure platform, and to develop a customer fulfillment pipeline for provisioning both cloud and on-premises infrastructure resources, including compute, storage and networking. Our team also provided coaching and mentoring that enabled the customer to stand up a pipeline-driven cloud platform for IaaS (and eventually PaaS and CaaS).

As a result of their DevOps / IaC engagement with Dell EMC Consulting, the customer was able to:

  • Accelerate the fulfillment of infrastructure to platform teams regardless of public cloud or on-premises requirements, and deliver IaaS using an Infrastructure-as-Code and CD toolchain at the end of sprints.
  • Put a resilient on-premises cloud platform in place for VM and container services.
  • Enable an optimized, automated flow, cutting provisioning time for developers.
  • Transform disparate internal teams into one, integrating an Infrastructure-as-Code and DevOps foundation and a pipeline-first discipline.

Critical to the success of this and many of our other customers is recognizing the central role that CD pipelines and treating infrastructure configuration as code can play in infrastructure automation.

Summary

We’d love to hear about the challenges you face on your DevOps / IaC transformation journey. For more information, see our Dell EMC DevOps and IaC Consulting services.


