<!-- 
RSS generated by JIRA (9.4.5#940005-sha1:e3094934eac4fd8653cf39da58f39364fb9cc7c1) at Sat Feb 10 06:01:25 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>Akraino JIRA</title>
    <link>https://jira.akraino.org</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>    <build-info>
        <version>9.4.5</version>
        <build-number>940005</build-number>
        <build-date>11-04-2023</build-date>
    </build-info>


<item>
            <title>[ICN-615] NS Lookup Failure when Nodus is a Primary CNI</title>
                <link>https://jira.akraino.org/browse/ICN-615</link>
                <project id="10400" key="ICN">Integrated Cloud Native NFV</project>
                    <description>&lt;p&gt;The Nodus is the primary CNI.&#160;&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;When the POD subnet and the ovn-controller-network configmap are set to 10.244.0.0/16, the DNS lookup fails with the following error messages:&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;$ kubectl exec -it dnsutils -- nslookup kubernetes.default&lt;/p&gt;

&lt;p&gt;;; reply from unexpected source: 10.244.0.3#53, expected 10.96.0.10#53&lt;/p&gt;

&lt;p&gt;;; reply from unexpected source: 10.244.0.3#53, expected 10.96.0.10#53&lt;/p&gt;

&lt;p&gt;;; reply from unexpected source: 10.244.0.3#53, expected 10.96.0.10#53&lt;/p&gt;

&lt;p&gt;;; connection timed out; no servers could be reached&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;But when the POD subnet and the ovn-controller-network are set to 10.158.142.0/18 (the default value configured in the ovn-controller-network), the DNS seems to work fine.&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;$ kubectl exec -it dnsutils -- nslookup kubernetes.default&lt;/p&gt;

&lt;p&gt;Server:&#160;&#160;&#160;&#160;&#160;&#160;&#160;&#160; 10.96.0.10&lt;/p&gt;

&lt;p&gt;Address:&#160;&#160;&#160;&#160;&#160;&#160;&#160; 10.96.0.10#53&lt;/p&gt;

&lt;p&gt;&#160;&lt;/p&gt;

&lt;p&gt;Name:&#160;&#160; kubernetes.default.svc.cluster.local&lt;/p&gt;

&lt;p&gt;Address: 10.96.0.1&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;But even in the second case above (where DNS works fine), the emco-monitor errors out and is restarted continuously on a multi-node cluster, whereas it works fine on a single-node cluster.&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;Is there any network configuration missing on the host?&lt;/p&gt;</description>
                <environment></environment>
        <key id="11908">ICN-615</key>
            <summary>NS Lookup Failure when Nodus is a Primary CNI</summary>
                <type id="10004" iconUrl="https://jira.akraino.org/secure/viewavatar?size=xsmall&amp;avatarId=10303&amp;avatarType=issuetype">Bug</type>
                                            <priority id="2" iconUrl="https://jira.akraino.org/images/icons/priorities/high.svg">High</priority>
                        <status id="10000" iconUrl="https://jira.akraino.org/" description="">To Do</status>
                    <statusCategory id="2" key="new" colorName="default"/>
                                    <resolution id="-1">Unresolved</resolution>
                                        <assignee username="r.kuralamudhan">Kuralamudhan Ramakrishnan</assignee>
                                    <reporter username="palaniap">Palaniappan Ram</reporter>
                        <labels>
                    </labels>
                <created>Thu, 28 Oct 2021 16:31:58 +0000</created>
                <updated>Sat, 6 Nov 2021 16:46:25 +0000</updated>
                                                                                <due></due>
                            <votes>0</votes>
                                    <watches>3</watches>
                                                                                                                <comments>
                            <comment id="11800" author="saddepalli" created="Sat, 6 Nov 2021 16:46:25 +0000"  >&lt;p&gt;Just to ensure that there are no stale files causing the issue, is it possible to create fresh VMs and try installing K8s on top of them?&lt;/p&gt;</comment>
                            <comment id="11709" author="JIRAUSER11411" created="Fri, 29 Oct 2021 01:55:28 +0000"  >&lt;p&gt;When switching the podCIDR, I removed Kubernetes with&lt;/p&gt;

&lt;p&gt;kubeadm reset -f&lt;/p&gt;

&lt;p&gt;iptables -F&lt;/p&gt;

&lt;p&gt;iptables -X&lt;/p&gt;

&lt;p&gt;before reinstallation, on all the nodes.&lt;/p&gt;

&lt;p&gt;There may still be stale config files on the host which need to be removed.&lt;/p&gt;</comment>
                            <comment id="11708" author="r.kuralamudhan" created="Thu, 28 Oct 2021 17:57:04 +0000"  >&lt;p&gt;We have to do a kubeadm reset and an iptables flush when we change the pod network subnet in the kubeadm control plane, in order to clean up iptables rules that already map service IPs to the old pod network CIDR range.&lt;/p&gt;</comment>
                            <comment id="11707" author="r.kuralamudhan" created="Thu, 28 Oct 2021 17:33:16 +0000"  >&lt;p&gt;I don&apos;t think there is any hardcoding here; we have made these configurable via a configmap. But there could be a case where the OVN resource is still pointing to 10.158.142.0/18 even after changing it to 10.244.0.0/16 and restarting the nfn plugins. I delete all the OVN resources, including the nfn plugins, the ovn-daemonset, and the ovn folder, before changing the subnet. I should document these steps and add them to deletion hooks for when the containers are deleted. But we have to debug Palani&apos;s setup to understand more about this issue.&lt;/p&gt;</comment>
                            <comment id="11706" author="saddepalli" created="Thu, 28 Oct 2021 17:04:49 +0000"  >&lt;p&gt;Some hardcoding somewhere?&lt;/p&gt;</comment>
                            <comment id="11705" author="r.kuralamudhan" created="Thu, 28 Oct 2021 16:48:58 +0000"  >&lt;p&gt;I see some issue with the way the subnet changes are happening here. We have to delete the OVN resources before changing the subnet.&lt;/p&gt;

&lt;p&gt;A subnet change should not cause DNS issues; some step is missing in this case. Let&apos;s debug this today.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                        <customfield id="customfield_10000" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                    <customfield id="customfield_10105" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>0|i005mk:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                    </customfields>
    </item>
</channel>
</rss>