
Senior Software Engineer

Posted on: 08.08.2018



Location: Bern, Bern CH




Requisition Number: 201010


Position Title: TB Sr. Data Eng (I)


External Description:


We are looking for a Senior Software Engineer (m/f) in Bern.


Primary Responsibilities:


As a Senior Software Engineer, you will provide technical leadership to clients in a team that designs and develops pioneering large-scale cluster data processing systems. You will advise sophisticated organizations on large-scale data and analytics and work with client teams to deliver results.


Additionally, as a senior member of our consulting team, you will help Teradata establish thought leadership in the big data space by contributing white papers and technical commentary and by representing our company at industry conferences.


Secondary Responsibilities:


  • Design and develop code, scripts, and data processing systems that leverage structured and unstructured data integrated from multiple sources

  • Install and configure software

  • Participate in and help lead requirements and design workshops with our clients

  • Develop project deliverable documentation

  • Lead small teams of developers and coordinate development activities

  • Mentor junior members of the team in software development best practices

  • Other duties as assigned


Job Qualifications:



  • Proven expertise in production software development

  • 7+ years of experience programming in Java, Python, and SQL

  • Capable of operationalizing software with a heavy emphasis on automation and DevOps best practices

  • Proficient in SQL, NoSQL, relational database design and methods for efficiently retrieving data

  • Strong analytical skills

  • Creative problem solver

  • Excellent verbal and written communications skills

  • Strong team player capable of working in a demanding start-up environment

  • Experience building complex and non-interactive systems (batch, distributed, etc.)


Preferred Knowledge, Skills and Abilities: 



  • Prior consulting experience

  • Experience designing and tuning high-performance systems

  • Prior working experience with Docker, Kubernetes, Mesos, or Red Hat OpenShift

  • Working experience with dynamic and/or functional languages (e.g., Python, Scala, Clojure, Ruby)

  • Developing on Cloud platforms such as AWS 

  • Familiar with data science fundamentals, e.g., prior experience operationalizing machine learning models in production environments

  • Experience with machine learning and deep learning concepts and frameworks; ideally big data experience with Apache Spark, Hadoop, HDFS, Hive, and HBase

  • Experience with data science workbenches like Jupyter, JupyterHub, and Zeppelin

  • Experience with serialization formats like Avro, Thrift, and Google Protobuf

  • Experience with message broker systems like Kafka, ActiveMQ, RabbitMQ, etc.

  • Familiar with stream processing, complex event processing, and event sourcing paradigms

  • Familiar with the OSI stack and the proper use of HTTP verbs, etc.

  • Prior experience with data warehousing and business intelligence systems as well as Linux expertise

  • Prior work and/or research experience with unstructured data and data modeling

  • Familiarity with different development methodologies (e.g., agile, waterfall, XP, scrum, etc.)

  • Broader experience with the Spring ecosystem including spring-batch, spring-mvc, and spring-hadoop or spring-dataflow

  • Experience in building standards-based REST implementations

  • Ability to configure a Jenkins build, create/update a Jira ticket, and enable automated tests in a Gradle/Maven build

  • Experience with Version Control Systems in multi-project setups (Git, Mercurial, SVN)

  • Knows how to tune Hive: knowledge of Hive physical design (file formats, compression, partitioning, bucketing), Hive queries, Hive for data science, Hive transactions, and the Hive-HBase integration

  • Knows how to tune a Spark job, whether via configuration parameters or more efficient API calls; understands Spark and Spark Streaming and can transform a DStream (a minimal example is sketched after this list)

  • Ability to write advanced UDFs, SerDes, and input loaders, perform log analysis, and understand how logical operators map to the lower-level physical implementation (a bare-bones Hive UDF is sketched below)

  • Understanding of best practices for Hive schemas, as well as denormalization, partitioning and bucketing, and file formats

  • Able to create, write to, and read from a Kafka topic; understands key partitioning (how it works) and can maintain an offset in the topic for consistent reading (see the consumer sketch following this list)
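
To illustrate the kind of DStream transformation meant above, here is a minimal sketch using Spark's Java Streaming API; the socket source, host, port, and batch interval are placeholder assumptions, not details of any specific project:

    import java.util.Arrays;

    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaDStream;
    import org.apache.spark.streaming.api.java.JavaPairDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    import scala.Tuple2;

    public class WordCountStream {
        public static void main(String[] args) throws InterruptedException {
            // Placeholder master/app name; 2-second micro-batches.
            SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("WordCountStream");
            JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(2));

            // Placeholder source: a text socket, e.g. fed by `nc -lk 9999`.
            JavaDStream<String> lines = jssc.socketTextStream("localhost", 9999);

            // Transform the DStream: split each line into words, count per batch.
            JavaDStream<String> words = lines.flatMap(
                    line -> Arrays.asList(line.split("\\s+")).iterator());
            JavaPairDStream<String, Integer> counts = words
                    .mapToPair(word -> new Tuple2<>(word, 1))
                    .reduceByKey(Integer::sum);

            counts.print();          // emit each batch's counts to stdout
            jssc.start();            // start receiving and processing
            jssc.awaitTermination(); // run until externally stopped
        }
    }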
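
Likewise, a bare-bones example of a Hive UDF, using the classic org.apache.hadoop.hive.ql.exec.UDF API; the class name and behavior are purely illustrative:

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // A trivial UDF that lower-cases a string column; Hive finds the
    // evaluate() method by reflection.
    public final class LowerUdf extends UDF {
        public Text evaluate(Text input) {
            if (input == null) {
                return null; // propagate SQL NULL
            }
            return new Text(input.toString().toLowerCase());
        }
    }

Such a class would typically be packaged into a jar, loaded with ADD JAR, and registered with CREATE TEMPORARY FUNCTION lower_udf AS 'LowerUdf'; before use in a query.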
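
Finally, a minimal sketch of reading a Kafka topic while maintaining offsets for consistent reading, using the standard Kafka Java client; the broker address, group id, and topic name are placeholders:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class OffsetAwareConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");           // placeholder group
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            // Disable auto-commit so offsets only advance after a batch has
            // actually been processed.
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("example-topic")); // placeholder topic
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        // Records with the same key land in the same partition,
                        // so per-key ordering holds within a partition.
                        System.out.printf("partition=%d offset=%d key=%s value=%s%n",
                                record.partition(), record.offset(), record.key(), record.value());
                    }
                    consumer.commitSync(); // persist consumed offsets only after processing
                }
            }
        }
    }

Because offsets are committed only after processing, a consumer that crashes re-reads at most the last uncommitted batch (at-least-once delivery) instead of silently skipping records.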


Job Abilities:



  • Willingness to travel to client sites up to 50% of the time

  • Ability to write code in the applicable programming languages

  • Excellent verbal and written communication in English; German is a plus

  • Ability to present technical concepts to both technical and non-technical groups


Education:


Bachelor's/Master's degree or foreign equivalent in Computer Science or a related technical field (Mathematics, Statistics, Machine Learning, Data Mining), followed by four to six years of progressively responsible professional experience programming in Java, Python, Scala, or C/C++.


Who fits in with us


At Teradata you will not find rigid hierarchies or detailed specifications. The best fit is a colleague who uses that freedom responsibly, in terms of both content and time; who is not afraid to ask questions, brings in their own ideas, and is open to new perspectives. Strong team spirit and independent work are not a contradiction for us. What counts are the contributions you make to our joint success. All our employees share our enthusiasm for the possibilities of data-driven decisions and solutions.


 






Company Profile:


With all the investments made in analytics, it’s time to stop buying into partial solutions that overpromise and underdeliver. It’s time to invest in answers. Only Teradata leverages all of the data, all of the time, so that customers can analyze anything, deploy anywhere, and deliver analytics that matter most to them. And we do it at scale, on-premises, in the Cloud, or anywhere in between.


We call this Pervasive Data Intelligence. It’s the answer to the complexity, cost, and inadequacy of today’s analytics. And it's the way Teradata transforms how businesses work and people live through the power of data throughout the world. Join us and help create the era of Pervasive Data Intelligence.

